Compare commits

...

80 Commits

Author SHA1 Message Date
a51b041621 Upgrade to Kubernetes 1.24 and Longhorn 1.4.0 2023-02-20 11:16:12 +02:00
1d6cf0a521 camtiler: Restore cams on members site 2023-01-25 09:55:04 +02:00
19d66801df prometheus-operator: Update node-exporter and add pve2 2023-01-07 10:27:05 +02:00
d2a719af43 README: Improve cluster formation docs
- Begin code block with sudo to remind that the following shall be run as root.
- Remove hardcoded key, instead copy it from the ubuntu user.
2023-01-03 16:09:17 +00:00
34369d211b Add nyancat server 2023-01-03 10:25:08 +02:00
cadb38126b prometheus-operator: Prevent scrape timeouts 2022-12-26 14:15:05 +02:00
414d044909 prometheus-operator: Less noisy alerting from node-exporter 2022-12-24 21:11:00 +02:00
ea23a52d6b prometheus-operator: Remove bundle.yml 2022-12-24 21:07:07 +02:00
3458cbd694 Update README 2022-12-24 21:01:49 +02:00
0a40686c16 logmower: Remove explicit command for event source 2022-12-24 00:02:01 +02:00
222fca8b8f camtiler: Fix scheduling issues 2022-12-23 23:32:18 +02:00
75df3e2a41 logmower: Fix Mongo affinity rules 2022-12-23 23:31:10 +02:00
5516ad195c Add descheduler 2022-12-23 23:30:39 +02:00
d0ac3b0361 prometheus-operator: Remove noisy kube-state-metrics alerts 2022-12-23 23:30:13 +02:00
c7daada4f4 Bump kube-state-metrics to v2.7.0 2022-12-22 20:04:05 +02:00
3a11207783 prometheus-operator: Remove useless KubernetesCronjobTooLong alert 2022-12-21 14:59:16 +02:00
3586309c4e prometheus-operator: Post only critical alerts to Slack 2022-12-21 14:13:57 +02:00
960103eb40 prometheus-operator: Bump bundle version 2022-12-21 14:08:23 +02:00
34b48308ff camtiler: Split up manifests 2022-12-18 16:28:45 +02:00
d8471da75f Migrate doorboy to Kubernetes 2022-12-17 17:49:57 +02:00
3dfa8e3203 camtiler: Clean ups 2022-12-14 19:50:55 +02:00
2a8c685345 camtiler: Scale down motion detectors 2022-12-14 18:58:32 +02:00
bccd2c6458 logmower: Updates 2022-12-14 18:56:08 +02:00
c65835c6a4 Update external-dns 2022-12-14 18:46:00 +02:00
76cfcd083b camtiler: Specify Mongo collection for event source 2022-12-13 13:10:11 +02:00
98ae369b41 camtiler: Fix event broker image name 2022-12-13 12:51:52 +02:00
4ccfd3d21a Replace old log viewer with Logmower + camera-event-broker 2022-12-13 12:43:38 +02:00
ea9b63b7cc camtiler: Dozen updates 2022-12-12 20:37:03 +02:00
b5ee891c97 Introduce separated storage classes per workload type 2022-12-06 09:06:07 +02:00
eccfb43aa1 Add rawfile-localpv 2022-12-02 00:10:04 +02:00
8f99b1b03d Source meta-operator from separate repo 2022-11-13 07:19:56 +02:00
024897a083 kube-system: Record pod labels with kube-state-metrics 2022-11-12 17:52:59 +02:00
18c4764687 prometheus-exporter: Fix antiaffinity rule for Mikrotik exporter 2022-11-12 16:50:31 +02:00
7b9cb6184b prometheus-operator: Reduce retention size 2022-11-12 16:07:42 +02:00
9dd32af3cb logmower: Update shipper arguments 2022-11-10 21:07:54 +02:00
a1cc066927 README: Bump sysctl limits 2022-11-10 07:56:13 +02:00
029572872e logmower: Update env vars 2022-11-09 11:49:13 +02:00
30f1c32815 harbor: Reduce logging verbosity 2022-11-05 22:43:00 +02:00
0c14283136 Add logmower 2022-11-05 20:55:52 +02:00
587748343d traefik: Namespace filtering breaks allowExternalNameServices 2022-11-04 12:20:30 +02:00
1bcfbed130 traefik: Bump version 2022-10-21 08:30:04 +03:00
3b1cda8a58 traefik: Pull resources only from trusted namespaces 2022-10-21 08:27:53 +03:00
2fd0112c28 elastic-system: Exclude logging ECK stack itself 2022-10-21 00:57:11 +03:00
9275f745ce elastic-system: Remove Filebeat's dependency on Kibana 2022-10-21 00:56:54 +03:00
3d86b6acde elastic-system: Bump to 8.4.3 2022-10-14 20:18:28 +03:00
4a94cd4af0 longhorn-system: Remove Prometheus annotation as we use PodMonitor already 2022-10-14 15:03:48 +03:00
a27f273c0b Add Grafana 2022-10-14 14:38:23 +03:00
4686108f42 Switch to wildcard *.k-space.ee certificate 2022-10-14 14:32:36 +03:00
30b7e50afb kube-system: Add metrics-server 2022-10-14 14:23:21 +03:00
e4c9675b99 tigera-operator: Remove unrelated files 2022-10-14 14:05:40 +03:00
017bdd9fd8 tigera-operator: Upgrade Calico 2022-10-14 14:03:34 +03:00
0fd0094ba0 playground: Initial commit 2022-10-14 00:14:35 +03:00
d20fdf350d drone: Switch templates to drone-kaniko plugin 2022-10-12 14:24:57 +03:00
bac5040d2a README: access/auth: collapse bootstrapping
For 'how to connect to cluster', server-side setup
is not needed from connecting clients.
Hiding the section makes the steps more concise.
2022-10-11 10:47:41 +03:00
Danyliuk 4d5851259d Update .gitignore file. Add IntelliJ IDEA part 2022-10-08 16:43:48 +00:00
8ee1896a55 harbor: Move to storage nodes 2022-10-04 13:39:25 +03:00
04b786b18d prometheus-operator: Bump blackbox exporter replica count to 3 2022-10-04 10:11:53 +03:00
1d1764093b prometheus-operator: Remove pulled UPS-es 2022-10-03 10:04:24 +03:00
df6e268eda elastic-system: Add PodMonitor for exporter 2022-09-30 10:33:41 +03:00
00f8bfef6c elastic-system: Update sharding, enable memory-mapped IO, move to Longhorn 2022-09-30 10:21:10 +03:00
109859e07b elastic-system: Reduce replica count for Kibana 2022-09-28 11:01:08 +03:00
7e518da638 elastic-system: Make Kibana healthcheck work with anonymous auth 2022-09-28 11:00:38 +03:00
5ef5e14866 prometheus-operator: Specify priorityClassName: system-node-critical for node-exporters 2022-09-28 10:33:44 +03:00
310b2faaef prometheus-operator: Add node label to node-exporters 2022-09-28 09:32:31 +03:00
6b65de65d4 Move kube-state-metrics 2022-09-26 15:50:58 +03:00
02d1236eba elastic-system: Add Syslog ingestion 2022-09-23 16:37:29 +03:00
610ce0d490 elastic-system: Bump version to 2.4.0 2022-09-23 16:16:22 +03:00
051e300359 Update tech mapping 2022-09-21 17:12:24 +03:00
5b11b7f3a6 phpmyadmin: Use 6446 for MySQL Operator instances 2022-09-21 11:38:13 +03:00
546dc71450 prometheus-operator: Fix SNMP for older HP printers 2022-09-20 23:26:09 +03:00
26a35cd0c3 prometheus-operator: Add snmp_ prefix 2022-09-20 17:09:26 +03:00
790ffa175b prometheus-operator: Fix Alertmanager integration 2022-09-20 12:22:49 +03:00
9a672d7ef3 logging: Bump ZincSearch memory limit 2022-09-18 10:05:54 +03:00
d1cb00ff83 Reduce Filebeat logging verbosity 2022-09-17 08:06:42 +03:00
9cc39fcd17 argocd: Add members repo 2022-09-17 08:06:19 +03:00
ae8d03ec03 argocd: Add elastic-system 2022-09-17 08:05:47 +03:00
bf9d063b2c mysql-operator: Bump to version 8.0.30-2.0.6 2022-09-16 08:41:07 +03:00
2efaf7b456 mysql-operator: Fix network policy 2022-09-16 08:40:31 +03:00
c4208037e2 logging: Replace Graylog with ZincSearch 2022-09-16 08:34:53 +03:00
edcb6399df elastic-system: Fixes and cleanups 2022-09-16 08:24:13 +03:00
83 changed files with 20277 additions and 32202 deletions

.gitignore (4 changes)

@ -3,3 +3,7 @@
*.swp
*.save
*.1
### IntelliJ IDEA ###
.idea
*.iml

README.md (133 changes)

@ -23,6 +23,7 @@ Most endpoints are protected by OIDC authentication or Authelia SSO middleware.
General discussion is happening in the `#kube` Slack channel.
<details><summary>Bootstrapping access</summary>
For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
nodes and place it under `~/.kube/config` on your machine.
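A minimal sketch of that copy, assuming SSH root access and that `master1.kube.k-space.ee` is one of the master nodes:

```bash
# Hostname is illustrative; any master node works
ssh root@master1.kube.k-space.ee 'cat /etc/kubernetes/admin.conf' > ~/.kube/config
chmod 600 ~/.kube/config
```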
@ -46,9 +47,9 @@ EOF
sudo systemctl daemon-reload
systemctl restart kubelet
```
</details>
Afterwards following can be used to talk to the Kubernetes cluster using
OIDC credentials:
The following can be used to talk to the Kubernetes cluster using OIDC credentials:
```bash
kubectl krew install oidc-login
@ -89,28 +90,41 @@ EOF
For access control mapping see [cluster-role-bindings.yml](cluster-role-bindings.yml)
### systemd-resolved issues on access
```sh
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
```
```
Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
```
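The same nameserver and search-domain settings can likely be applied from a terminal with `resolvectl`; a sketch assuming the VPN interface is called `tun0`:

```bash
# Substitute your actual VPN interface name for tun0
resolvectl dns tun0 172.21.0.1 2001:bb8:4008:21::1
resolvectl domain tun0 kube.k-space.ee
```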
# Technology mapping
Our self-hosted Kubernetes stack compared to AWS based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-----------------|-------------------------------------|---------------------------------------------------------------------|
| AWS EC2 | Proxmox | Virtualization layer |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS NLB | MetalLB | L2/L3 level load balancing |
|-------------------|-------------------------------------|---------------------------------------------------------------------|
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS DocumentDB | MongoDB | NoSQL database |
| AWS S3 | Minio | Object storage |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub Actions | Drone | Build Docker images |
| Gmail | Wildduck | E-mail |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Drone | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |
External dependencies running as classic virtual machines:
@ -141,7 +155,8 @@ these should be handled by `tls:` section in Ingress.
## Cluster formation
Create Ubuntu 20.04 VM-s on Proxmox with local storage.
Created Ubuntu 22.04 VM-s on Proxmox with local storage.
Added some ARM64 workers by using Ubuntu 22.04 server on Raspberry Pi.
After machines have booted up and you can reach them via SSH:
@ -159,6 +174,13 @@ net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
# Elasticsearch needs this
vm.max_map_count = 524288
# Bump inotify limits to make sure
fs.inotify.max_user_instances=1280
fs.inotify.max_user_watches=655360
EOF
sysctl --system
@ -172,32 +194,23 @@ nameserver 8.8.8.8
EOF
# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd
systemctl disable multipathd
systemctl stop multipathd
# Disable Snapcraft
systemctl mask snapd
systemctl disable snapd
systemctl stop snapd
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat << EOF > /root/.ssh/authorized_keys
sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBD4/e9SWYWYoNZMkkF+NirhbmHuUgjoCap42kAq0pLIXFwIqgVTCre03VPoChIwBClc8RspLKqr5W3j0fG8QwnQAAAAEc3NoOg== lauri@lauri-x13
EOF
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
userdel -f ubuntu
apt-get remove -yq cloud-init
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
```
Install packages, for Raspbian set `OS=Debian_11`
Install packages:
```bash
OS=xUbuntu_20.04
VERSION=1.23
OS=xUbuntu_22.04
VERSION=1.24
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
@ -205,17 +218,26 @@ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cr
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
rm -fv /etc/apt/trusted.gpg
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/libcontainers.gpg
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg > /etc/apt/trusted.gpg.d/packages-cloud-google.gpg
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -yqq apt-transport-https curl cri-o cri-o-runc kubelet=1.23.5-00 kubectl=1.23.5-00 kubeadm=1.23.5-00
apt-get install -yqq --allow-change-held-packages apt-transport-https curl cri-o cri-o-runc kubelet=1.24.10-00 kubectl=1.24.10-00 kubeadm=1.24.10-00
cat << \EOF > /etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]
# To pull Docker images from a mirror uncomment following
#[[registry]]
#prefix = "docker.io"
#location = "mirror.gcr.io"
EOF
sudo systemctl restart crio
sudo systemctl daemon-reload
sudo systemctl enable crio --now
apt-mark hold kubelet kubeadm kubectl
sed -i -e 's/unqualified-search-registries = .*/unqualified-search-registries = ["docker.io"]/' /etc/containers/registries.conf
```
On master:
@ -226,6 +248,16 @@ kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-e
For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.
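As an illustration, a join command might look like the following; the token and CA hash are placeholders taken from the `kubeadm init` output or from `kubeadm token create --print-join-command`:

```bash
kubeadm join master.kube.k-space.ee:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --node-name $(hostname -f)
```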
Set AZ labels:
```
for j in $(seq 1 9); do
for t in master mon worker storage; do
kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
done
done
```
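To verify the labels landed, print the zone as a column:

```bash
kubectl get nodes -L topology.kubernetes.io/zone
```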
After forming the cluster add taints:
```bash
@ -233,7 +265,7 @@ for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done
for j in $(seq 1 3); do
for j in $(seq 1 4); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
@ -244,15 +276,26 @@ for j in $(seq 1 4); do
done
```
On Raspberry Pi you need to take additional steps:
* Manually enable cgroups by appending
`cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`,
* Disable swap with `swapoff -a; apt-get purge -y dphys-swapfile`
* For mounting Longhorn volumes on Raspbian install `open-iscsi`
For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:
```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```
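Multi-arch workloads that should still run on those nodes then need a matching toleration in their pod spec; a minimal fragment (where it goes depends on the workload's manifest):

```
tolerations:
  - key: arch
    operator: Equal
    value: arm64
    effect: NoSchedule
```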
For door controllers:
```
for j in ground front back; do
kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```
To reduce wear on storage:
```
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```


@ -36,8 +36,13 @@ kubectl -n argocd create secret generic gitea-kube-staging \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-staging \
--from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd create secret generic gitea-kube-members \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-members \
--from-file=sshPrivateKey=id_ecdsa
kubectl label -n argocd secret gitea-kube argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-staging argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-members argocd.argoproj.io/secret-type=repository
rm -fv id_ecdsa
```
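To sanity-check that all three repositories are registered, list the secrets by the label applied above:

```bash
kubectl -n argocd get secrets -l argocd.argoproj.io/secret-type=repository
```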


@ -5,17 +5,16 @@ metadata:
namespace: argocd
spec:
project: default
destination:
server: 'https://kubernetes.default.svc'
namespace: elastic-system
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: elastic-system
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: elastic-system
syncPolicy:
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- group: admissionregistration.k8s.io
kind: ValidatingWebhookConfiguration


@ -1,17 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: foobar
name: grafana
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: foobar
path: grafana
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: foobar
namespace: grafana
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: logmower
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: logmower
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: logmower
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: members
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube-members.git'
path: .
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: members
syncPolicy:
syncOptions:
- CreateNamespace=true


@ -16,7 +16,6 @@ server:
ingress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
@ -24,8 +23,7 @@ server:
- argocd.k-space.ee
tls:
- hosts:
- argocd.k-space.ee
secretName: argocd-server-tls
- "*.k-space.ee"
configEnabled: true
config:
admin.enabled: "false"


@ -162,8 +162,8 @@ kubectl -n argocd create secret generic argocd-secret \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "argocd") | .secret' -r)
kubectl -n monitoring delete secret oidc-secret
kubectl -n monitoring create secret generic oidc-secret \
kubectl -n grafana delete secret oidc-secret
kubectl -n grafana create secret generic oidc-secret \
--from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \


@ -295,7 +295,6 @@ metadata:
labels:
app.kubernetes.io/name: authelia
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/tls-acme: "true"
traefik.ingress.kubernetes.io/router.entryPoints: websecure
@ -315,8 +314,7 @@ spec:
number: 80
tls:
- hosts:
- auth.k-space.ee
secretName: authelia-tls
- "*.k-space.ee"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware


@ -1,7 +1,16 @@
To apply changes:
```
kubectl apply -n camtiler -f application.yml -f persistence.yml -f mongoexpress.yml -f mongodb-support.yml -f networkpolicy-base.yml -f minio-support.yml
kubectl apply -n camtiler \
-f application.yml \
-f persistence.yml \
-f mongoexpress.yml \
-f mongodb-support.yml \
-f camera-tiler.yml \
-f logmower.yml \
-f ingress.yml \
-f network-policies.yml \
-f networkpolicy-base.yml
```
To deploy changes:
@ -15,15 +24,16 @@ To initialize secrets:
```
kubectl create secret generic -n camtiler mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler minio-secret \
--from-literal=accesskey=application \
--from-literal=secretkey=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n camtiler minio-env-configuration \
--from-literal="MINIO_BROWSER=off" \
kubectl create secret generic -n camtiler minio-secrets \
--from-literal="MINIO_ROOT_USER=root" \
--from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)" \
--from-literal="MINIO_STORAGE_CLASS_STANDARD=EC:4"
--from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)"
kubectl -n camtiler create secret generic camera-secrets \
--from-literal=username=... \
--from-literal=password=...
```
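Should a generated credential be needed later, it can be read back from the secret; a sketch for the Minio root password:

```bash
kubectl -n camtiler get secret minio-secrets \
  -o jsonpath='{.data.MINIO_ROOT_PASSWORD}' | base64 -d; echo
```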
To restart all deployments:
```
for j in $(kubectl get deployments -n camtiler -o name); do kubectl rollout restart -n camtiler $j; done
```
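Assuming a reasonably recent kubectl, the loop can also be collapsed into a single command that restarts every deployment in the namespace:

```bash
kubectl -n camtiler rollout restart deployment
```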


@ -1,397 +1,4 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: camtiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: camtiler
template:
metadata:
labels:
app.kubernetes.io/name: camtiler
component: camtiler
spec:
serviceAccountName: camtiler
containers:
- name: camtiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-frontend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-frontend
spec:
containers:
- name: log-viewer-frontend
image: harbor.k-space.ee/k-space/log-viewer-frontend:latest
# securityContext:
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: log-viewer-backend
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
template:
metadata:
labels:
app.kubernetes.io/name: log-viewer-backend
spec:
containers:
- name: log-backend-backend
image: harbor.k-space.ee/k-space/log-viewer:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: MINIO_BUCKET
value: application
- name: MINIO_HOSTNAME
value: cams-s3.k-space.ee
- name: MINIO_PORT
value: "443"
- name: MINIO_SCHEME
value: "https"
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-frontend
ports:
- protocol: TCP
port: 3003
---
apiVersion: v1
kind: Service
metadata:
name: log-viewer-backend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: log-viewer-backend
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: camtiler
labels:
component: camtiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camtiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camtiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camtiler
subjects:
- kind: ServiceAccount
name: camtiler
apiGroup: ""
roleRef:
kind: Role
name: camtiler
apiGroup: ""
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
# Following specifies the certificate issuer defined in
# ../cert-manager/issuer.yml
# This is where the HTTPS certificates for the
# `tls:` section below are obtained from
cert-manager.io/cluster-issuer: default
# This tells Traefik this Ingress object is associated with the
# https:// entrypoint
# Global http:// to https:// redirect is enabled in
# ../traefik/values.yml using `globalArguments`
traefik.ingress.kubernetes.io/router.entrypoints: websecure
# Following enables Authelia intercepting middleware
# which makes sure user is authenticated and then
# proceeds to inject Remote-User header for the application
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
# Following tells external-dns to add CNAME entry which makes
# cams.k-space.ee point to same IP address as traefik.k-space.ee
# The A record for traefik.k-space.ee is created via annotation
# added in ../traefik/ingress.yml
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camtiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: log-viewer-backend
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: log-viewer-frontend
port:
number: 3003
tls:
- hosts:
- cams.k-space.ee
secretName: camtiler-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camdetect
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
component: camtiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
v1.min.io/tenant: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
component: camtiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camdetect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-backend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-backend
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: log-viewer-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams-s3.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: minio
port:
number: 80
tls:
- hosts:
- cams-s3.k-space.ee
secretName: cams-s3-tls
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
@ -482,12 +89,13 @@ spec:
metadata:
name: foobar
labels:
component: camdetect
app.kubernetes.io/name: foobar
component: camera-motion-detect
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: foobar
component: camdetect
component: camera-motion-detect
ports:
- protocol: TCP
port: 80
@ -502,14 +110,15 @@ spec:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 1
# Make sure we do not congest the network during rollout
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
# Swap the following two values when running with replicas: 2
maxSurge: 1
maxUnavailable: 0
selector:
matchLabels:
app.kubernetes.io/name: foobar
@ -517,18 +126,25 @@ spec:
metadata:
labels:
app.kubernetes.io/name: foobar
component: camdetect
component: camera-motion-detect
spec:
containers:
- name: camdetect
- name: camera-motion-detect
image: harbor.k-space.ee/k-space/camera-motion-detect:latest
startupProbe:
httpGet:
path: /healthz
port: 5000
initialDelaySeconds: 2
periodSeconds: 180
timeoutSeconds: 60
readinessProbe:
httpGet:
path: /readyz
port: 5000
initialDelaySeconds: 10
periodSeconds: 180
timeoutSeconds: 60
initialDelaySeconds: 60
periodSeconds: 60
timeoutSeconds: 5
ports:
- containerPort: 5000
name: "http"
@ -538,7 +154,7 @@ spec:
cpu: "200m"
limits:
memory: "256Mi"
cpu: "1"
cpu: "4000m"
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
@ -566,13 +182,13 @@ spec:
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: secretkey
name: minio-secrets
key: MINIO_ROOT_PASSWORD
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: minio-secret
key: accesskey
name: minio-secrets
key: MINIO_ROOT_USER
# Make sure 2+ pods of same camera are scheduled on different hosts
affinity:
@ -580,7 +196,7 @@ spec:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
- key: app.kubernetes.io/name
operator: In
values:
- foobar
@ -594,18 +210,7 @@ spec:
labelSelector:
matchLabels:
app.kubernetes.io/name: foobar
component: camdetect
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector: {}
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
component: camera-motion-detect
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
@ -616,21 +221,21 @@ spec:
- name: cameras
rules:
- alert: CameraLost
expr: rate(camdetect_rx_frames_total[2m]) < 1
expr: rate(camtiler_frames_total{stage="downloaded"}[1m]) < 1
for: 2m
labels:
severity: warning
annotations:
summary: Camera feed stopped
- alert: CameraServerRoomMotion
expr: camdetect_event_active {app="camdetect-server-room"} > 0
expr: rate(camtiler_events_total{app_kubernetes_io_name="server-room"}[30m]) > 0
for: 1m
labels:
severity: warning
annotations:
summary: Motion was detected in server room
- alert: CameraSlowUploads
expr: rate(camdetect_upload_dropped_frames_total[2m]) > 1
expr: camtiler_queue_frames{stage="upload"} > 10
for: 5m
labels:
severity: warning
@ -638,13 +243,20 @@ spec:
summary: Motion detect snapshots are piling up and
not getting uploaded to S3
- alert: CameraSlowProcessing
expr: rate(camdetect_download_dropped_frames_total[2m]) > 1
expr: camtiler_queue_frames{stage="download"} > 10
for: 5m
labels:
severity: warning
annotations:
summary: Motion detection processing pipeline is not keeping up
with incoming frames
- alert: CameraResourcesThrottled
expr: sum by (pod) (rate(container_cpu_cfs_throttled_periods_total{namespace="camtiler"}[1m])) > 0
for: 5m
labels:
severity: warning
annotations:
summary: CPU limits are bottleneck
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -653,6 +265,7 @@ metadata:
spec:
target: http://user@workshop.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -661,6 +274,7 @@ metadata:
spec:
target: http://user@server-room.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -669,6 +283,7 @@ metadata:
spec:
target: http://user@printer.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -677,6 +292,7 @@ metadata:
spec:
target: http://user@chaos.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -685,6 +301,7 @@ metadata:
spec:
target: http://user@cyber.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -693,6 +310,7 @@ metadata:
spec:
target: http://user@kitchen.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -701,6 +319,7 @@ metadata:
spec:
target: http://user@back-door.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1
---
apiVersion: k-space.ee/v1alpha1
kind: Camera
@ -709,3 +328,4 @@ metadata:
spec:
target: http://user@ground-door.cam.k-space.ee:8080/?action=stream
secretRef: camera-secrets
replicas: 1

camtiler/camera-tiler.yml (new file, 98 lines)

@ -0,0 +1,98 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: camera-tiler
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: camera-tiler
template:
metadata:
labels: *selectorLabels
spec:
serviceAccountName: camera-tiler
containers:
- name: camera-tiler
image: harbor.k-space.ee/k-space/camera-tiler:latest
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5001
name: "http"
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "4000m"
---
apiVersion: v1
kind: Service
metadata:
name: camera-tiler
labels:
app.kubernetes.io/name: camtiler
component: camera-tiler
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: camera-tiler
ports:
- protocol: TCP
port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: camera-tiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
rules:
- apiGroups:
- ""
resources:
- services
verbs:
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: camera-tiler
subjects:
- kind: ServiceAccount
name: camera-tiler
apiGroup: ""
roleRef:
kind: Role
name: camera-tiler
apiGroup: ""
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: camtiler
spec:
selector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
podMetricsEndpoints:
- port: http
podTargetLabels:
- app.kubernetes.io/name
- component

camtiler/ingress.yml (new file, 67 lines)

@ -0,0 +1,67 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: camtiler
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd,camtiler-redirect@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
- host: cam.k-space.ee
http:
paths:
- pathType: Prefix
path: "/tiled"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/m"
backend:
service:
name: camera-tiler
port:
number: 5001
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: redirect
spec:
redirectRegex:
regex: ^https://cams.k-space.ee/(.*)$
replacement: https://cam.k-space.ee/$1
permanent: false

camtiler/logmower.yml (new file, 137 lines)

@ -0,0 +1,137 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-eventsource
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-eventsource
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: logmower-eventsource
image: harbor.k-space.ee/k-space/logmower-eventsource
ports:
- containerPort: 3002
name: nodejs
env:
- name: MONGO_COLLECTION
value: eventlog
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: mongodb-application-readonly
key: connectionString.standard
- name: BACKEND
value: 'camtiler'
- name: BACKEND_BROKER_URL
value: 'http://logmower-event-broker'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-event-broker
spec:
revisionHistoryLimit: 0
replicas: 5
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-event-broker
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: logmower-event-broker
image: harbor.k-space.ee/k-space/camera-event-broker
ports:
- containerPort: 3000
env:
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secrets
key: MINIO_ROOT_PASSWORD
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: minio-secrets
key: MINIO_ROOT_USER
- name: MINIO_BUCKET
value: 'application'
- name: MINIO_HOSTNAME
value: 'cams-s3.k-space.ee'
- name: MINIO_PORT
value: '443'
- name: MINIO_SCHEMA
value: 'https'
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-frontend
spec:
revisionHistoryLimit: 0
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: camtiler
component: logmower-frontend
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: logmower-frontend
image: harbor.k-space.ee/k-space/logmower-frontend
ports:
- containerPort: 8080
name: http
---
apiVersion: v1
kind: Service
metadata:
name: logmower-frontend
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-frontend
ports:
- protocol: TCP
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: logmower-eventsource
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: logmower-event-broker
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
ports:
- protocol: TCP
port: 80
targetPort: 3000


@ -1 +0,0 @@
../shared/minio-support.yml

camtiler/minio.yml (new file, 199 lines)

@ -0,0 +1,199 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: minio
labels:
app.kubernetes.io/name: minio
spec:
selector:
matchLabels:
app.kubernetes.io/name: minio
serviceName: minio-svc
replicas: 4
podManagementPolicy: Parallel
template:
metadata:
labels:
app.kubernetes.io/name: minio
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- minio
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
containers:
- name: minio
env:
- name: MINIO_PROMETHEUS_AUTH_TYPE
value: public
envFrom:
- secretRef:
name: minio-secrets
image: minio/minio:RELEASE.2022-12-12T19-27-27Z
args:
- server
- http://minio-{0...3}.minio-svc.camtiler.svc.cluster.local/data
- --address
- 0.0.0.0:9000
- --console-address
- 0.0.0.0:9001
ports:
- containerPort: 9000
name: http
- containerPort: 9001
name: console
readinessProbe:
httpGet:
path: /minio/health/ready
port: 9000
initialDelaySeconds: 2
periodSeconds: 5
resources:
requests:
cpu: 300m
memory: 1Gi
limits:
cpu: 4000m
memory: 2Gi
volumeMounts:
- name: minio-data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: minio-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: '30Gi'
storageClassName: minio
---
apiVersion: v1
kind: Service
metadata:
name: minio
spec:
sessionAffinity: ClientIP
type: ClusterIP
ports:
- port: 80
targetPort: 9000
protocol: TCP
name: http
selector:
app.kubernetes.io/name: minio
---
kind: Service
apiVersion: v1
metadata:
name: minio-svc
spec:
selector:
app.kubernetes.io/name: minio
clusterIP: None
publishNotReadyAddresses: true
ports:
- name: http
port: 9000
- name: console
port: 9001
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: minio
spec:
selector:
matchLabels:
app.kubernetes.io/name: minio
podMetricsEndpoints:
- port: http
path: /minio/v2/metrics/node
podTargetLabels:
- app.kubernetes.io/name
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: minio
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
port: minio
path: /minio/v2/metrics/cluster
selector:
matchLabels:
app.kubernetes.io/name: minio
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minio
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: cams-s3.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: minio-svc
port:
name: http
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: minio
spec:
groups:
- name: minio
rules:
- alert: MinioClusterDiskOffline
expr: minio_cluster_disk_offline_total > 0
for: 0m
labels:
severity: critical
annotations:
summary: Minio cluster disk offline (instance {{ $labels.instance }})
description: "Minio cluster disk is offline"
- alert: MinioNodeDiskOffline
expr: minio_cluster_nodes_offline_total > 0
for: 0m
labels:
severity: critical
annotations:
summary: Minio node disk offline (instance {{ $labels.instance }})
description: "Minio cluster node disk is offline"
- alert: MinioDiskSpaceUsage
expr: disk_storage_available / disk_storage_total * 100 < 10
for: 0m
labels:
severity: warning
annotations:
summary: Minio disk space usage (instance {{ $labels.instance }})
description: "Minio available free space is low (< 10%)"


@ -7,9 +7,10 @@ spec:
additionalMongodConfig:
systemLog:
quiet: true
members: 3
members: 2
arbiters: 1
type: ReplicaSet
version: "5.0.9"
version: "6.0.3"
security:
authentication:
modes: ["SCRAM"]
@ -27,7 +28,7 @@ spec:
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
- name: read
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
@ -35,6 +36,24 @@ spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 500m
memory: 1Gi
volumeMounts:
- name: journal-volume
mountPath: /data/journal
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@ -55,8 +74,21 @@ spec:
volumeClaimTemplates:
- metadata:
name: logs-volume
labels:
usecase: logs
spec:
storageClassName: local-path
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
- metadata:
name: journal-volume
labels:
usecase: journal
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
@ -64,67 +96,12 @@ spec:
storage: 512Mi
- metadata:
name: data-volume
labels:
usecase: data
spec:
storageClassName: local-path
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
name: minio
annotations:
prometheus.io/path: /minio/prometheus/metrics
prometheus.io/port: "9000"
prometheus.io/scrape: "true"
spec:
credsSecret:
name: minio-secret
buckets:
- name: application
requestAutoCert: false
users:
- name: minio-user-0
pools:
- name: pool-0
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: v1.min.io/tenant
operator: In
values:
- minio
- key: v1.min.io/pool
operator: In
values:
- pool-0
topologyKey: kubernetes.io/hostname
resources:
requests:
cpu: '1'
memory: 512Mi
servers: 4
volumesPerServer: 1
volumeClaimTemplate:
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: '30Gi'
storageClassName: local-path
status: {}
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule


@ -0,0 +1,192 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-motion-detect
spec:
podSelector:
matchLabels:
component: camera-motion-detect
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- ipBlock:
# Permit access to cameras outside the cluster
cidr: 100.102.0.0/16
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ports:
- port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: camera-tiler
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: camera-tiler
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
component: camera-motion-detect
ports:
- port: 5000
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-eventsource
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-eventsource
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
- podSelector:
matchLabels:
component: logmower-event-broker
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-event-broker
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-event-broker
policyTypes:
- Ingress
- Egress
egress:
- to:
# Minio access via Traefik's public endpoint
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ingress:
- from:
- podSelector:
matchLabels:
component: logmower-eventsource
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-frontend
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: camtiler
component: logmower-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: minio
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: minio
policyTypes:
- Ingress
- Egress
egress:
- ports:
- port: http
to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ingress:
- ports:
- port: http
from:
- podSelector: {}
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus


@ -77,14 +77,11 @@ steps:
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
image: harbor.k-space.ee/k-space/drone-kaniko
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
repo: ${DRONE_REPO}
tags: latest-arm64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
username:
from_secret: docker_username
password:
@ -109,14 +106,11 @@ steps:
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
image: harbor.k-space.ee/k-space/drone-kaniko
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
repo: ${DRONE_REPO}
tags: latest-amd64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
storage_driver: vfs
username:
from_secret: docker_username
@ -130,8 +124,8 @@ steps:
- name: manifest
image: plugins/manifest
settings:
target: harbor.k-space.ee/${DRONE_REPO}:latest
template: harbor.k-space.ee/${DRONE_REPO}:latest-ARCH
target: ${DRONE_REPO}:latest
template: ${DRONE_REPO}:latest-ARCH
platforms:
- linux/amd64
- linux/arm64


@ -83,7 +83,6 @@ kind: Ingress
metadata:
name: drone
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
@ -91,8 +90,7 @@ metadata:
spec:
tls:
- hosts:
- "drone.k-space.ee"
secretName: drone-tls
- "*.k-space.ee"
rules:
- host: "drone.k-space.ee"
http:


@ -1,7 +1,7 @@
# elastic-operator
```
wget https://download.elastic.co/downloads/eck/2.2.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.2.0/operator.yaml
wget https://download.elastic.co/downloads/eck/2.4.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.4.0/operator.yaml
kubectl apply -n elastic-system -f application.yml -f crds.yaml -f operator.yaml
```


@ -1,15 +1,16 @@
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: filebeat
spec:
type: filebeat
version: 8.4.1
version: 8.4.3
elasticsearchRef:
name: elasticsearch
kibanaRef:
name: kibana
config:
logging:
level: warning
http:
enabled: true
port: 5066
@ -24,50 +25,15 @@ spec:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- drop_fields:
fields:
- stream
- target
- host
ignore_missing: true
- rename:
fields:
- from: "kubernetes.node.name"
to: "host"
- from: "kubernetes.pod.name"
to: "pod"
- from: "kubernetes.labels.app"
to: "app"
- from: "kubernetes.namespace"
to: "namespace"
ignore_missing: true
- drop_fields:
fields:
- input
- agent
- container
- ecs
- host
- kubernetes
- log
- "@metadata"
ignore_missing: true
- decode_json_fields:
fields:
- message
max_depth: 2
expand_keys: true
target: ""
add_error_key: true
daemonSet:
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
serviceAccountName: filebeat
automountServiceAccountToken: true
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true # Allows providing richer host metadata
containers:
- name: filebeat
securityContext:
@ -84,6 +50,12 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.nodeName
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
- name: exporter
image: sepa/beats-exporter
args:
@ -108,6 +80,104 @@ spec:
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: filebeat-syslog
spec:
type: filebeat
version: 8.4.3
elasticsearchRef:
name: elasticsearch
config:
logging:
level: warning
http:
enabled: true
port: 5066
filebeat:
inputs:
- type: syslog
format: rfc5424
protocol.udp:
host: "0.0.0.0:1514"
- type: syslog
format: rfc5424
protocol.tcp:
host: "0.0.0.0:1514"
deployment:
replicas: 2
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 1514
name: syslog
protocol: UDP
volumeMounts:
- name: filebeat-registry
mountPath: /usr/share/filebeat/data
- name: exporter
image: sepa/beats-exporter
args:
- -p=5066
ports:
- containerPort: 8080
name: exporter
protocol: TCP
volumes:
- name: filebeat-registry
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: filebeat-syslog-udp
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: filebeat-syslog
port: 514
protocol: UDP
targetPort: 1514
selector:
beat.k8s.elastic.co/name: filebeat-syslog
---
apiVersion: v1
kind: Service
metadata:
name: filebeat-syslog-tcp
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: filebeat-syslog
port: 514
protocol: TCP
targetPort: 1514
selector:
beat.k8s.elastic.co/name: filebeat-syslog
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@ -148,12 +218,10 @@ kind: Elasticsearch
metadata:
name: elasticsearch
spec:
version: 8.4.1
version: 8.4.3
nodeSets:
- name: default
count: 3
config:
node.store.allow_mmap: false
count: 1
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
@ -163,7 +231,7 @@ spec:
resources:
requests:
storage: 5Gi
storageClassName: local-path
storageClassName: longhorn
http:
tls:
selfSignedCertificate:
@ -174,8 +242,8 @@ kind: Kibana
metadata:
name: kibana
spec:
version: 8.4.1
count: 2
version: 8.4.3
count: 1
elasticsearchRef:
name: elasticsearch
http:
@ -196,6 +264,23 @@ spec:
entries:
- key: elastic
path: xpack.security.authc.providers.anonymous.anonymous1.credentials.password
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
containers:
- name: kibana
readinessProbe:
httpGet:
path: /app/home
port: 5601
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
---
apiVersion: networking.k8s.io/v1
kind: Ingress
@ -203,7 +288,6 @@ metadata:
name: kibana
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@ -222,5 +306,26 @@ spec:
number: 5601
tls:
- hosts:
- kibana.k-space.ee
secretName: kibana-tls
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: filebeat
spec:
selector:
matchLabels:
beat.k8s.elastic.co/name: filebeat
podMetricsEndpoints:
- port: exporter
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: elasticsearch
spec:
selector:
matchLabels:
app.kubernetes.io/name: elasticsearch-exporter
podMetricsEndpoints:
- port: exporter


@ -3,12 +3,12 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: agents.agent.k8s.elastic.co
spec:
group: agent.k8s.elastic.co
@ -203,7 +203,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -246,7 +246,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -259,7 +259,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -376,6 +376,13 @@ spec:
- standalone
- fleet
type: string
policyID:
description: PolicyID optionally determines into which Agent Policy this Agent will be enrolled. If left empty the default policy will be used.
type: string
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying DaemonSet or Deployment.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes Secrets containing sensitive configuration options for the Agent. Secrets data can be then referenced in the Agent config using the Secret's keys or as specified in `Entries` field of each SecureSetting.
items:
@ -448,24 +455,18 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: apmservers.apm.k8s.elastic.co
spec:
group: apm.k8s.elastic.co
@ -565,7 +566,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -608,7 +609,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -621,7 +622,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -736,6 +737,10 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the APM Server pods.
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes secrets containing sensitive configuration options for APM Server.
items:
@ -792,6 +797,10 @@ spec:
kibanaAssociationStatus:
description: KibanaAssociationStatus is the status of any auto-linking to Kibana.
type: string
observedGeneration:
description: ObservedGeneration represents the .metadata.generation that the status is based upon. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the APM Server controller has not yet processed the changes contained in the APM Server specification.
format: int64
type: integer
secretTokenSecret:
description: SecretTokenSecretName is the name of the Secret that contains the secret token
type: string
@ -895,7 +904,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -938,7 +947,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -951,7 +960,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -1112,24 +1121,18 @@ spec:
type: object
served: false
storage: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: beats.beat.k8s.elastic.co
spec:
group: beat.k8s.elastic.co
@ -1294,6 +1297,10 @@ spec:
description: ServiceName is the name of an existing Kubernetes service which is used to make requests to the referenced object. It has to be in the same namespace as the referenced resource. If left empty, the default HTTP service of the referenced resource is used.
type: string
type: object
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying DaemonSet or Deployment.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes Secrets containing sensitive configuration options for the Beat. Secrets data can be then referenced in the Beat config using the Secret's keys or as specified in `Entries` field of each SecureSetting.
items:
@ -1353,6 +1360,10 @@ spec:
kibanaAssociationStatus:
description: AssociationStatus is the status of an association resource.
type: string
observedGeneration:
description: ObservedGeneration represents the .metadata.generation that the status is based upon. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the Beats controller has not yet processed the changes contained in the Beats specification.
format: int64
type: integer
version:
description: 'Version of the stack resource currently running. During version upgrades, multiple versions may run in parallel: this value specifies the lowest version currently running.'
type: string
@ -1362,24 +1373,18 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: elasticmapsservers.maps.k8s.elastic.co
spec:
group: maps.k8s.elastic.co
@ -1486,7 +1491,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -1529,7 +1534,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -1542,7 +1547,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -1641,6 +1646,10 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the Elastic Maps Server pods
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
serviceAccountName:
description: ServiceAccountName is used to check access from the current resource to a resource (for ex. Elasticsearch) in a different namespace. Can only be used if ECK is enforcing RBAC on references.
type: string
@ -1667,6 +1676,10 @@ spec:
health:
description: Health of the deployment.
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed for this Elastic Maps Server. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the Elastic Maps controller has not yet processed the changes contained in the Elastic Maps specification.
format: int64
type: integer
selector:
description: Selector is the label selector used to find all pods.
type: string
@ -1683,24 +1696,18 @@ spec:
specReplicasPath: .spec.count
statusReplicasPath: .status.count
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: elasticsearches.elasticsearch.k8s.elastic.co
spec:
group: elasticsearch.k8s.elastic.co
@ -1803,7 +1810,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -1846,7 +1853,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -1859,7 +1866,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -2058,15 +2065,15 @@ spec:
type: string
type: object
spec:
description: 'Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
dataSource:
description: 'This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@ -2081,8 +2088,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@ -2097,8 +2105,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
resources:
description: 'Resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
limits:
additionalProperties:
@ -2120,7 +2129,7 @@ spec:
type: object
type: object
selector:
description: A label query over volumes to consider for binding.
description: selector is a label query over volumes to consider for binding.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@ -2149,21 +2158,22 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
volumeMode:
description: volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
type: string
volumeName:
description: VolumeName is the binding reference to the PersistentVolume backing this claim.
description: volumeName is the binding reference to the PersistentVolume backing this claim.
type: string
type: object
status:
description: 'Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
@ -2174,7 +2184,7 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: allocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
type: object
capacity:
additionalProperties:
@ -2183,26 +2193,26 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Represents the actual resources of the underlying volume.
description: capacity represents the actual resources of the underlying volume.
type: object
conditions:
description: Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
description: conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
items:
description: PersistentVolumeClaimCondition contains details about state of pvc
properties:
lastProbeTime:
description: Last time we probed the condition.
description: lastProbeTime is the time we probed the condition.
format: date-time
type: string
lastTransitionTime:
description: Last time the condition transitioned from one status to another.
description: lastTransitionTime is the time the condition transitioned from one status to another.
format: date-time
type: string
message:
description: Human-readable message indicating details about last transition.
description: message is the human-readable message indicating details about last transition.
type: string
reason:
description: Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
description: reason is unique; it should be a short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
type: string
status:
type: string
@ -2215,10 +2225,10 @@ spec:
type: object
type: array
phase:
description: Phase represents the current phase of PersistentVolumeClaim.
description: phase represents the current phase of PersistentVolumeClaim.
type: string
resizeStatus:
description: ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
type: string
type: object
type: object
@ -2267,7 +2277,7 @@ spec:
description: An eviction is allowed if at least "minAvailable" pods selected by "selector" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying "100%".
x-kubernetes-int-or-string: true
selector:
description: Label query over pods whose evictions are managed by the disruption budget. A null selector selects no pods. An empty selector ({}) also selects no pods, which differs from standard behavior of selecting all pods. In policy/v1, an empty selector will select all pods in the namespace.
description: Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@ -2296,6 +2306,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
type: object
type: object
remoteClusters:
@ -2324,6 +2335,10 @@ spec:
- name
type: object
type: array
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying StatefulSets.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes secrets containing sensitive configuration options for Elasticsearch.
items:
@ -2384,7 +2399,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -2427,7 +2442,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -2440,7 +2455,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -2764,7 +2779,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -2807,7 +2822,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -2820,7 +2835,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -2968,15 +2983,15 @@ spec:
type: string
type: object
spec:
description: 'Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
dataSource:
description: 'This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@ -2991,8 +3006,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@ -3007,8 +3023,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
resources:
description: 'Resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
limits:
additionalProperties:
@ -3030,7 +3047,7 @@ spec:
type: object
type: object
selector:
description: A label query over volumes to consider for binding.
description: selector is a label query over volumes to consider for binding.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@ -3059,21 +3076,22 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
volumeMode:
description: volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
type: string
volumeName:
description: VolumeName is the binding reference to the PersistentVolume backing this claim.
description: volumeName is the binding reference to the PersistentVolume backing this claim.
type: string
type: object
status:
description: 'Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
@ -3084,7 +3102,7 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: allocatedResources tracks the storage capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
type: object
capacity:
additionalProperties:
@ -3093,26 +3111,26 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Represents the actual resources of the underlying volume.
description: capacity represents the actual resources of the underlying volume.
type: object
conditions:
description: Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
description: conditions is the current set of conditions of the persistent volume claim. If the underlying persistent volume is being resized, the condition will be set to 'ResizeStarted'.
items:
description: PersistentVolumeClaimCondition contains details about the state of a PVC
properties:
lastProbeTime:
description: Last time we probed the condition.
description: lastProbeTime is the time we probed the condition.
format: date-time
type: string
lastTransitionTime:
description: Last time the condition transitioned from one status to another.
description: lastTransitionTime is the time the condition transitioned from one status to another.
format: date-time
type: string
message:
description: Human-readable message indicating details about last transition.
description: message is the human-readable message indicating details about last transition.
type: string
reason:
description: Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
description: reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted", the underlying persistent volume is being resized.
type: string
status:
type: string
@ -3125,10 +3143,10 @@ spec:
type: object
type: array
phase:
description: Phase represents the current phase of PersistentVolumeClaim.
description: phase represents the current phase of PersistentVolumeClaim.
type: string
resizeStatus:
description: ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: resizeStatus stores the status of the resize operation. ResizeStatus is not set by default; when expansion is complete, resizeStatus is set to the empty string by the resize controller or kubelet. This is an alpha field and requires enabling the RecoverVolumeExpansionFailure feature.
type: string
type: object
type: object
@ -3207,6 +3225,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
type: object
type: object
secureSettings:
@ -3283,24 +3302,18 @@ spec:
type: object
served: false
storage: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: enterprisesearches.enterprisesearch.k8s.elastic.co
spec:
group: enterprisesearch.k8s.elastic.co
@ -3407,7 +3420,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -3450,7 +3463,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -3463,7 +3476,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -3562,6 +3575,10 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the Enterprise Search pods.
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
serviceAccountName:
description: ServiceAccountName is used to check access from the current resource to a resource (for ex. Elasticsearch) in a different namespace. Can only be used if ECK is enforcing RBAC on references.
type: string
@ -3586,6 +3603,10 @@ spec:
health:
description: Health of the deployment.
type: string
observedGeneration:
description: ObservedGeneration represents the .metadata.generation that the status is based upon. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the Enterprise Search controller has not yet processed the changes contained in the Enterprise Search specification.
format: int64
type: integer
selector:
description: Selector is the label selector used to find all pods.
type: string
@ -3697,7 +3718,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -3740,7 +3761,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -3753,7 +3774,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -3891,24 +3912,18 @@ spec:
storage: false
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: kibanas.kibana.k8s.elastic.co
spec:
group: kibana.k8s.elastic.co
@ -4024,7 +4039,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -4067,7 +4082,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -4080,7 +4095,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -4229,6 +4244,10 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the Kibana pods
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes secrets containing sensitive configuration options for Kibana.
items:
@ -4395,7 +4414,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -4438,7 +4457,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -4451,7 +4470,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -4606,10 +4625,4 @@ spec:
type: object
served: false
storage: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@ -14,7 +14,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: v1
@ -24,7 +24,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
---
# Source: eck-operator/templates/configmap.yaml
apiVersion: v1
@ -34,7 +34,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
data:
eck.yaml: |-
log-verbosity: 0
@ -54,6 +54,7 @@ data:
validate-storage-class: true
enable-webhook: true
webhook-name: elastic-webhook.k8s.elastic.co
enable-leader-election: true
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
@ -62,7 +63,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
rules:
- apiGroups:
- "authorization.k8s.io"
@ -70,6 +71,22 @@ rules:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- coordination.k8s.io
resources:
- leases
resourceNames:
- elastic-operator-leader
verbs:
- get
- watch
- update
- apiGroups:
- ""
resources:
@ -251,7 +268,7 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
@ -284,7 +301,7 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
@ -315,7 +332,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -333,7 +350,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
spec:
ports:
- name: https
@ -350,7 +367,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
spec:
selector:
matchLabels:
@ -363,7 +380,7 @@ spec:
# Rename the fields "error" to "error.message" and "source" to "event.source"
# This is to avoid a conflict with the ECS "error" and "source" documents.
"co.elastic.logs/raw": "[{\"type\":\"container\",\"json.keys_under_root\":true,\"paths\":[\"/var/log/containers/*${data.kubernetes.container.id}.log\"],\"processors\":[{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"error\",\"to\":\"_error\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_error\",\"to\":\"error.message\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"source\",\"to\":\"_source\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_source\",\"to\":\"event.source\"}]}}]}]"
"checksum/config": 302bbb79b6fb0ffa41fcc06e164252c7dad887cf4d8149c8e1e5203c7651277e
"checksum/config": a99a5f63f628a1ca8df440c12506cdfbf17827a1175dc5765b05f22f92b12b95
labels:
control-plane: elastic-operator
spec:
@ -372,7 +389,7 @@ spec:
securityContext:
runAsNonRoot: true
containers:
- image: "docker.elastic.co/eck/eck-operator:2.2.0"
- image: "docker.elastic.co/eck/eck-operator:2.4.0"
imagePullPolicy: IfNotPresent
name: manager
args:
@ -423,7 +440,7 @@ metadata:
name: elastic-webhook.k8s.elastic.co
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
webhooks:
- clientConfig:
caBundle: Cg==

View File

@ -79,7 +79,6 @@ metadata:
namespace: etherpad
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@ -97,8 +96,7 @@ spec:
number: 9001
tls:
- hosts:
- pad.k-space.ee
secretName: pad-tls
- "*.k-space.ee"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy

View File

@ -2,9 +2,9 @@ Before applying replace the secret with the actual one.
For debugging, add `- --log-level=debug`:
```
kubectl apply -n external-dns -f external-dns.yml
wget https://raw.githubusercontent.com/kubernetes-sigs/external-dns/master/docs/contributing/crd-source/crd-manifest.yaml -O crd.yml
kubectl apply -n external-dns -f application.yml -f crd.yml
```
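The flag itself goes into the container `args` of the external-dns Deployment in `application.yml`; a minimal sketch, assuming the container defines an args list (other fields elided):
```
containers:
  - name: external-dns
    image: k8s.gcr.io/external-dns/external-dns:v0.13.1
    args:
      - --log-level=debug  # drop again once done debugging
```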
Insert TSIG secret:
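A minimal sketch with a placeholder value; the literal key name here is illustrative and must match whatever `application.yml` expects from the `tsig-secret` secret referenced via `envFrom`:
```
# <actual-key> is the TSIG key material; TSIG_SECRET is a placeholder key name
kubectl create secret generic tsig-secret -n external-dns \
  --from-literal=TSIG_SECRET=<actual-key>
```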

View File

@ -24,6 +24,20 @@ rules:
- get
- list
- watch
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints
verbs:
- get
- watch
- list
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints/status
verbs:
- update
---
apiVersion: v1
kind: ServiceAccount
@ -63,7 +77,7 @@ spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: k8s.gcr.io/external-dns/external-dns:v0.10.2
image: k8s.gcr.io/external-dns/external-dns:v0.13.1
envFrom:
- secretRef:
name: tsig-secret

grafana/README.md
View File

@ -0,0 +1,19 @@
# Grafana
```
kubectl create namespace grafana
kubectl apply -n grafana -f application.yml
```
## OIDC secret
See Authelia README on provisioning and updating OIDC secrets for Grafana
## Grafana post deployment steps
* Configure Prometheus datasource with URL set to
`http://prometheus-operated.prometheus-operator.svc.cluster.local:9090`
* Configure Elasticsearch datasource with URL set to
`http://elasticsearch.elastic-system.svc.cluster.local`,
Time field name set to `timestamp` and
ElasticSearch version set to `7.10+` (see the provisioning sketch below)
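Both datasources could also be provisioned declaratively instead of through the UI; a minimal sketch of a hypothetical `datasources.yaml`, assuming it is mounted under `/etc/grafana/provisioning/datasources/`:
```
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-operated.prometheus-operator.svc.cluster.local:9090
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://elasticsearch.elastic-system.svc.cluster.local
    jsonData:
      timeField: timestamp
      esVersion: "7.10.0"  # Grafana 8.x takes the ES version as a string here
```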

grafana/application.yml
View File

@ -0,0 +1,135 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-config
data:
grafana.ini: |
[log]
level = warn
[server]
domain = grafana.k-space.ee
root_url = https://%(domain)s/
[auth.generic_oauth]
name = OAuth
icon = signin
enabled = true
client_id = grafana
scopes = openid profile email groups
empty_scopes = false
auth_url = https://auth.k-space.ee/api/oidc/authorize
token_url = https://auth.k-space.ee/api/oidc/token
api_url = https://auth.k-space.ee/api/oidc/userinfo
allow_sign_up = true
role_attribute_path = contains(groups[*], 'Grafana Admins') && 'Admin' || 'Viewer'
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: grafana
name: grafana
spec:
revisionHistoryLimit: 0
serviceName: grafana
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
spec:
securityContext:
fsGroup: 472
containers:
- name: grafana
image: grafana/grafana:8.5.0
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 472
envFrom:
- secretRef:
name: oidc-secret
ports:
- containerPort: 3000
name: http-grafana
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /robots.txt
port: 3000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 3000
timeoutSeconds: 1
resources:
requests:
cpu: 250m
memory: 750Mi
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-data
- mountPath: /etc/grafana
name: grafana-config
volumes:
- name: grafana-config
configMap:
name: grafana-config
volumeClaimTemplates:
- metadata:
name: grafana-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: grafana
spec:
ports:
- port: 80
protocol: TCP
targetPort: http-grafana
selector:
app: grafana
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: grafana.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: grafana
port:
number: 80
tls:
- hosts:
- "*.k-space.ee"

View File

@ -35,7 +35,7 @@ data:
TRIVY_ADAPTER_URL: "http://harbor-trivy:8080"
REGISTRY_STORAGE_PROVIDER_NAME: "filesystem"
WITH_CHARTMUSEUM: "false"
LOG_LEVEL: "info"
LOG_LEVEL: "warning"
CONFIG_PATH: "/etc/core/app.conf"
CHART_CACHE_DRIVER: "redis"
_REDIS_URL_CORE: "redis://harbor-redis:6379/0?idle_timeout_seconds=30"
@ -397,7 +397,6 @@ spec:
containers:
- name: core
image: goharbor/harbor-core:v2.4.2
imagePullPolicy: IfNotPresent
startupProbe:
httpGet:
path: /api/v2.0/ping
@ -406,16 +405,9 @@ spec:
failureThreshold: 360
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 2
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/v2.0/ping
path: /api/v2.0/projects
scheme: HTTP
port: 8080
failureThreshold: 2
@ -472,6 +464,13 @@ spec:
secret:
- name: psc
emptyDir: {}
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/jobservice/jobservice-dpl.yaml
apiVersion: apps/v1
@ -502,14 +501,6 @@ spec:
containers:
- name: jobservice
image: goharbor/harbor-jobservice:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /api/v1/stats
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/v1/stats
@ -544,6 +535,13 @@ spec:
- name: job-logs
persistentVolumeClaim:
claimName: harbor-jobservice
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/portal/deployment.yaml
apiVersion: apps/v1
@ -574,14 +572,6 @@ spec:
containers:
- name: portal
image: goharbor/harbor-portal:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /
@ -599,6 +589,13 @@ spec:
- name: portal-config
configMap:
name: "harbor-portal"
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/registry/registry-dpl.yaml
apiVersion: apps/v1
@ -629,14 +626,6 @@ spec:
containers:
- name: registry
image: goharbor/registry-photon:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
scheme: HTTP
port: 5000
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /
@ -664,14 +653,6 @@ spec:
subPath: config.yml
- name: registryctl
image: goharbor/harbor-registryctl:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /api/health
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/health
@ -722,6 +703,13 @@ spec:
- name: registry-data
persistentVolumeClaim:
claimName: harbor-registry
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/database/database-ss.yaml
apiVersion: apps/v1
@ -756,7 +744,6 @@ spec:
# we may remove it after several releases
- name: "data-migrator"
image: goharbor/harbor-db:v2.4.2
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "[ -e /var/lib/postgresql/data/postgresql.conf ] && [ ! -d /var/lib/postgresql/data/pgdata ] && mkdir -m 0700 /var/lib/postgresql/data/pgdata && mv /var/lib/postgresql/data/* /var/lib/postgresql/data/pgdata/ || true"]
volumeMounts:
@ -769,7 +756,6 @@ spec:
# as "fsGroup" applied before the init container running, the container has enough permission to execute the command
- name: "data-permissions-ensurer"
image: goharbor/harbor-db:v2.4.2
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "chmod -R 700 /var/lib/postgresql/data/pgdata || true"]
volumeMounts:
@ -779,13 +765,6 @@ spec:
containers:
- name: database
image: goharbor/harbor-db:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- /docker-healthcheck.sh
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
exec:
command:
@ -811,6 +790,13 @@ spec:
emptyDir:
medium: Memory
sizeLimit: 512Mi
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: "database-data"
@ -853,12 +839,6 @@ spec:
containers:
- name: redis
image: goharbor/redis-photon:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
tcpSocket:
port: 6379
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
tcpSocket:
port: 6379
@ -868,6 +848,13 @@ spec:
- name: data
mountPath: /var/lib/redis
subPath:
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: data
@ -970,15 +957,6 @@ spec:
mountPath: /home/scanner/.cache
subPath:
readOnly: false
livenessProbe:
httpGet:
scheme: HTTP
path: /probe/healthy
port: api-server
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 10
readinessProbe:
httpGet:
scheme: HTTP
@ -995,6 +973,13 @@ spec:
requests:
cpu: 200m
memory: 512Mi
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: data
@ -1016,7 +1001,6 @@ metadata:
labels:
app: harbor
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
ingress.kubernetes.io/proxy-body-size: "0"
ingress.kubernetes.io/ssl-redirect: "true"
@ -1027,9 +1011,8 @@ metadata:
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- secretName: harbor-tls
hosts:
- harbor.k-space.ee
- hosts:
- "*.k-space.ee"
rules:
- http:
paths:

View File

@ -0,0 +1,165 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
---
apiVersion: v1
kind: ConfigMap
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
data:
policy.yaml: |
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
LowNodeUtilization:
enabled: true
params:
nodeResourceUtilizationThresholds:
targetThresholds:
cpu: 50
memory: 50
pods: 50
thresholds:
cpu: 20
memory: 20
pods: 20
RemoveDuplicates:
enabled: true
RemovePodsHavingTooManyRestarts:
enabled: true
params:
podsHavingTooManyRestarts:
includingInitContainers: true
podRestartThreshold: 100
RemovePodsViolatingInterPodAntiAffinity:
enabled: true
RemovePodsViolatingNodeAffinity:
enabled: true
params:
nodeAffinityType:
- requiredDuringSchedulingIgnoredDuringExecution
RemovePodsViolatingNodeTaints:
enabled: true
RemovePodsViolatingTopologySpreadConstraint:
enabled: true
params:
includeSoftConstraints: false
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: descheduler
labels:
app.kubernetes.io/name: descheduler
rules:
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["create", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "delete"]
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
- apiGroups: ["scheduling.k8s.io"]
resources: ["priorityclasses"]
verbs: ["get", "watch", "list"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create", "update"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
resourceNames: ["descheduler"]
verbs: ["get", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: descheduler
labels:
app.kubernetes.io/name: descheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: descheduler
subjects:
- kind: ServiceAccount
name: descheduler
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
spec:
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: descheduler
template:
metadata:
labels: *selectorLabels
spec:
priorityClassName: system-cluster-critical
serviceAccountName: descheduler
containers:
- name: descheduler
image: "k8s.gcr.io/descheduler/descheduler:v0.25.1"
imagePullPolicy: IfNotPresent
command:
- "/bin/descheduler"
args:
- "--policy-config-file"
- "/policy-dir/policy.yaml"
- "--descheduling-interval"
- 5m
- "--v"
- "3"
- --leader-elect=true
ports:
- containerPort: 10258
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10258
scheme: HTTPS
initialDelaySeconds: 3
periodSeconds: 10
resources:
requests:
cpu: 500m
memory: 256Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
volumes:
- name: policy-volume
configMap:
name: descheduler

View File

@ -159,7 +159,9 @@ spec:
spec:
automountServiceAccountToken: true
containers:
- image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.6.0
- image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.7.0
args:
- --metric-labels-allowlist=pods=[*]
livenessProbe:
httpGet:
path: /healthz
@ -219,3 +221,260 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-state-metrics
spec:
groups:
- name: kube-state-metrics
rules:
- alert: KubernetesNodeReady
expr: kube_node_status_condition{condition="Ready",status="true"} == 0
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes Node not ready (instance {{ $labels.instance }})
description: "Node {{ $labels.node }} has been unready for more than 10 minutes\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesMemoryPressure
expr: kube_node_status_condition{condition="MemoryPressure",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes memory pressure (instance {{ $labels.instance }})
description: "{{ $labels.node }} has MemoryPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDiskPressure
expr: kube_node_status_condition{condition="DiskPressure",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes disk pressure (instance {{ $labels.instance }})
description: "{{ $labels.node }} has DiskPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesOutOfDisk
expr: kube_node_status_condition{condition="OutOfDisk",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes out of disk (instance {{ $labels.instance }})
description: "{{ $labels.node }} has OutOfDisk condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesOutOfCapacity
expr: sum by (node) ((kube_pod_status_phase{phase="Running"} == 1) + on(uid) group_left(node) (0 * kube_pod_info{pod_template_hash=""})) / sum by (node) (kube_node_status_allocatable{resource="pods"}) * 100 > 90
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes out of capacity (instance {{ $labels.instance }})
description: "{{ $labels.node }} is out of capacity\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesContainerOomKiller
expr: (kube_pod_container_status_restarts_total - kube_pod_container_status_restarts_total offset 10m >= 1) and ignoring (reason) min_over_time(kube_pod_container_status_last_terminated_reason{reason="OOMKilled"}[10m]) == 1
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes container oom killer (instance {{ $labels.instance }})
description: "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} has been OOMKilled {{ $value }} times in the last 10 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesJobFailed
expr: kube_job_status_failed > 0
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes Job failed (instance {{ $labels.instance }})
description: "Job {{$labels.namespace}}/{{$labels.exported_job}} failed to complete\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesCronjobSuspended
expr: kube_cronjob_spec_suspend != 0
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes CronJob suspended (instance {{ $labels.instance }})
description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is suspended\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPersistentvolumeclaimPending
expr: kube_persistentvolumeclaim_status_phase{phase="Pending"} == 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes PersistentVolumeClaim pending (instance {{ $labels.instance }})
description: "PersistentVolumeClaim {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is pending\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesVolumeOutOfDiskSpace
expr: kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes * 100 < 10
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes Volume out of disk space (instance {{ $labels.instance }})
description: "Volume is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPersistentvolumeError
expr: kube_persistentvolume_status_phase{phase=~"Failed|Pending", job="kube-state-metrics"} > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes PersistentVolume error (instance {{ $labels.instance }})
description: "Persistent volume is in bad state\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetDown
expr: (kube_statefulset_status_replicas_ready / kube_statefulset_status_replicas_current) != 1
for: 1m
labels:
severity: critical
annotations:
summary: Kubernetes StatefulSet down (instance {{ $labels.instance }})
description: "A StatefulSet went down\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaScalingAbility
expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="AbleToScale"} == 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes HPA scaling ability (instance {{ $labels.instance }})
description: "Pod is unable to scale\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaMetricAvailability
expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="ScalingActive"} == 1
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes HPA metric availability (instance {{ $labels.instance }})
description: "HPA is not able to collect metrics\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaScaleCapability
expr: kube_horizontalpodautoscaler_status_desired_replicas >= kube_horizontalpodautoscaler_spec_max_replicas
for: 2m
labels:
severity: info
annotations:
summary: Kubernetes HPA scale capability (instance {{ $labels.instance }})
description: "The maximum number of desired Pods has been hit\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPodNotHealthy
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
description: "Pod has been in a non-ready state for longer than 15 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPodCrashLooping
expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesReplicassetMismatch
expr: kube_replicaset_spec_replicas != kube_replicaset_status_ready_replicas
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes ReplicaSet mismatch (instance {{ $labels.instance }})
description: "ReplicaSet replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDeploymentReplicasMismatch
expr: kube_deployment_spec_replicas != kube_deployment_status_replicas_available
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes Deployment replicas mismatch (instance {{ $labels.instance }})
description: "Deployment Replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetReplicasMismatch
expr: kube_statefulset_status_replicas_ready != kube_statefulset_status_replicas
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes StatefulSet replicas mismatch (instance {{ $labels.instance }})
description: "A StatefulSet does not match the expected number of replicas.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDeploymentGenerationMismatch
expr: kube_deployment_status_observed_generation != kube_deployment_metadata_generation
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes Deployment generation mismatch (instance {{ $labels.instance }})
description: "A Deployment has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetGenerationMismatch
expr: kube_statefulset_status_observed_generation != kube_statefulset_metadata_generation
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes StatefulSet generation mismatch (instance {{ $labels.instance }})
description: "A StatefulSet has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetUpdateNotRolledOut
expr: max without (revision) (kube_statefulset_status_current_revision unless kube_statefulset_status_update_revision) * (kube_statefulset_replicas != kube_statefulset_status_replicas_updated)
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes StatefulSet update not rolled out (instance {{ $labels.instance }})
description: "StatefulSet update has not been rolled out.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDaemonsetRolloutStuck
expr: kube_daemonset_status_number_ready / kube_daemonset_status_desired_number_scheduled * 100 < 100 or kube_daemonset_status_desired_number_scheduled - kube_daemonset_status_current_number_scheduled > 0
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes DaemonSet rollout stuck (instance {{ $labels.instance }})
description: "Some Pods of DaemonSet are not scheduled or not ready\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDaemonsetMisscheduled
expr: sum by (namespace, daemonset) (kube_daemonset_status_number_misscheduled) > 0
for: 1m
labels:
severity: critical
annotations:
summary: Kubernetes DaemonSet misscheduled (instance {{ $labels.instance }})
description: "Some DaemonSet Pods are running where they are not supposed to run\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesJobSlowCompletion
expr: kube_job_spec_completions - kube_job_status_succeeded > 0
for: 12h
labels:
severity: critical
annotations:
summary: Kubernetes job slow completion (instance {{ $labels.instance }})
description: "Kubernetes Job {{ $labels.namespace }}/{{ $labels.job_name }} did not complete in time.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiServerErrors
expr: sum(rate(apiserver_request_total{job="apiserver",code=~"^(?:5..)$"}[1m])) / sum(rate(apiserver_request_total{job="apiserver"}[1m])) * 100 > 3
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes API server errors (instance {{ $labels.instance }})
description: "Kubernetes API server is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiClientErrors
expr: (sum(rate(rest_client_requests_total{code=~"(4|5).."}[1m])) by (instance, job) / sum(rate(rest_client_requests_total[1m])) by (instance, job)) * 100 > 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes API client errors (instance {{ $labels.instance }})
description: "Kubernetes API client is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesClientCertificateExpiresNextWeek
expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 7*24*60*60
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes client certificate expires next week (instance {{ $labels.instance }})
description: "A client certificate used to authenticate to the apiserver is expiring next week.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesClientCertificateExpiresSoon
expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 24*60*60
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes client certificate expires soon (instance {{ $labels.instance }})
description: "A client certificate used to authenticate to the apiserver is expiring in less than 24.0 hours.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiServerLatency
# apiserver_request_latencies_bucket (microseconds) was removed in Kubernetes 1.17; on this 1.24 cluster the duration_seconds histogram replaces it
expr: histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{subresource!="log",verb!~"^(?:CONNECT|WATCHLIST|WATCH|PROXY)$"}[10m])) WITHOUT (instance, resource)) > 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes API server latency (instance {{ $labels.instance }})
description: "Kubernetes API server has a 99th percentile latency of {{ $value }} seconds for {{ $labels.verb }} {{ $labels.resource }}.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"

View File

@ -0,0 +1,197 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
- --metric-resolution=15s
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100

View File

@ -269,7 +269,6 @@ metadata:
certManager: "true"
rewriteTarget: "true"
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
@ -289,5 +288,4 @@ spec:
number: 80
tls:
- hosts:
- dashboard.k-space.ee
secretName: dashboard-tls
- "*.k-space.ee"

View File

@ -14,7 +14,7 @@ To deploy:
```
kubectl create namespace logging
kubectl apply -n logging -f mongodb-support.yml -f application.yml -f filebeat.yml -f networkpolicy-base.yml
kubectl apply -n logging -f zinc.yml -f application.yml -f filebeat.yml -f networkpolicy-base.yml
kubectl rollout restart -n logging daemonset.apps/filebeat
```
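To confirm the restart completed, a standard kubectl check (not part of this README) is:
```
kubectl -n logging rollout status daemonset/filebeat
```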

View File

@ -1,452 +0,0 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
serviceName: elasticsearch
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
securityContext:
fsGroup: 1000
containers:
- name: elasticsearch
image: elasticsearch:7.17.3
securityContext:
runAsNonRoot: true
runAsUser: 1000
env:
- name: discovery.type
value: single-node
- name: xpack.security.enabled
value: "false"
ports:
- containerPort: 9200
readinessProbe:
httpGet:
path: /_cluster/health
port: 9200
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
memory: "2147483648"
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data
- name: elasticsearch-tmp
mountPath: /tmp/
volumes:
- emptyDir: {}
name: elasticsearch-keystore
- emptyDir: {}
name: elasticsearch-tmp
- emptyDir: {}
name: elasticsearch-logs
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "10Gi"
storageClassName: longhorn
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
ports:
- name: api
port: 80
targetPort: 9200
selector:
app: elasticsearch
---
apiVersion: v1
kind: Service
metadata:
name: graylog-gelf-tcp
labels:
app: graylog
spec:
ports:
- name: graylog-gelf-tcp
port: 12201
protocol: TCP
targetPort: 12201
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog-logstash
labels:
app: graylog
spec:
ports:
- name: graylog-logstash
port: 5044
protocol: TCP
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog-syslog-tcp
labels:
app: graylog
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: graylog-syslog
port: 514
protocol: TCP
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog-syslog-udp
labels:
app: graylog
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: graylog-syslog
port: 514
protocol: UDP
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog
labels:
app: graylog
spec:
ports:
- name: graylog
port: 9000
protocol: TCP
selector:
app: graylog
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: graylog
labels:
app: graylog
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
serviceName: graylog
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: graylog
template:
metadata:
labels:
app: graylog
annotations:
prometheus.io/port: "9833"
prometheus.io/scrape: "true"
spec:
securityContext:
fsGroup: 1100
volumes:
- name: graylog-config
downwardAPI:
items:
- path: id
fieldRef:
fieldPath: metadata.name
containers:
- name: graylog
image: graylog/graylog:4.3
env:
- name: GRAYLOG_MONGODB_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: GRAYLOG_PROMETHEUS_EXPORTER_ENABLED
value: "true"
- name: GRAYLOG_PROMETHEUS_EXPORTER_BIND_ADDRESS
value: "0.0.0.0:9833"
- name: GRAYLOG_NODE_ID_FILE
value: /config/id
- name: GRAYLOG_HTTP_EXTERNAL_URI
value: "https://graylog.k-space.ee/"
- name: GRAYLOG_TRUSTED_PROXIES
value: "0.0.0.0/0"
- name: GRAYLOG_ELASTICSEARCH_HOSTS
value: "http://elasticsearch"
- name: GRAYLOG_MESSAGE_JOURNAL_ENABLED
value: "false"
- name: GRAYLOG_ROTATION_STRATEGY
value: "size"
- name: GRAYLOG_ELASTICSEARCH_MAX_SIZE_PER_INDEX
value: "268435456"
- name: GRAYLOG_ELASTICSEARCH_MAX_NUMBER_OF_INDICES
value: "16"
envFrom:
- secretRef:
name: graylog-secrets
securityContext:
runAsNonRoot: true
runAsUser: 1100
ports:
- containerPort: 9000
name: graylog
- containerPort: 9833
name: graylog-metrics
livenessProbe:
httpGet:
path: /api/system/lbstatus
port: 9000
initialDelaySeconds: 5
periodSeconds: 30
failureThreshold: 3
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /api/system/lbstatus
port: 9000
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
timeoutSeconds: 5
volumeMounts:
- name: graylog-config
mountPath: /config
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: graylog
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
spec:
rules:
- host: graylog.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: graylog
port:
number: 9000
tls:
- hosts:
- graylog.k-space.ee
secretName: graylog-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: graylog
spec:
podSelector:
matchLabels:
app: graylog
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: elasticsearch
ports:
- port: 9200
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
ingress:
- from:
- ipBlock:
cidr: 172.23.0.0/16
- ipBlock:
cidr: 172.21.0.0/16
- ipBlock:
cidr: 100.102.0.0/16
ports:
- protocol: UDP
port: 514
- protocol: TCP
port: 514
- from:
- podSelector:
matchLabels:
app: filebeat
ports:
- protocol: TCP
port: 5044
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: prometheus
ports:
- port: 9833
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ports:
- protocol: TCP
port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: elasticsearch
spec:
podSelector:
matchLabels:
app: elasticsearch
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: graylog
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: grafana
egress:
- to:
- ipBlock:
# geoip.elastic.co updates
cidr: 0.0.0.0/0
ports:
- port: 443
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongodb
spec:
members: 3
type: ReplicaSet
version: "5.0.9"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: mongodb-application-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: mongodb-application-readwrite
- name: readonly
db: application
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
spec:
template:
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi

View File

@ -6,18 +6,15 @@ metadata:
namespace: logging
data:
filebeat.yml: |-
logging:
level: warning
setup:
ilm:
enabled: false
template:
name: filebeat
pattern: filebeat-*
http.enabled: true
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
filebeat.autodiscover:
providers:
- type: kubernetes
@ -27,50 +24,24 @@ data:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- add_host_metadata:
- drop_fields:
fields:
- stream
ignore_missing: true
- rename:
fields:
- from: "kubernetes.node.name"
to: "source"
- from: "kubernetes.pod.name"
to: "pod"
- from: "stream"
to: "stream"
- from: "kubernetes.labels.app"
to: "app"
- from: "kubernetes.namespace"
to: "namespace"
ignore_missing: true
- drop_fields:
fields:
- agent
- container
- ecs
- host
- kubernetes
- log
- "@metadata"
ignore_missing: true
output.logstash:
hosts: ["graylog-logstash:5044"]
#output.console:
# pretty: true
output:
elasticsearch:
hosts:
- http://zinc:4080
path: "/es/"
index: "filebeat-%{+yyyy.MM.dd}"
username: "${ZINC_FIRST_ADMIN_USER}"
password: "${ZINC_FIRST_ADMIN_PASSWORD}"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: logging
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 100%
maxUnavailable: 50%
selector:
matchLabels:
app: filebeat
@ -78,11 +49,13 @@ spec:
metadata:
labels:
app: filebeat
annotations:
co.elastic.logs/json.keys_under_root: "true"
spec:
serviceAccountName: filebeat
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.17.6
image: docker.elastic.co/beats/filebeat:8.4.1
args:
- -c
- /etc/filebeat.yml
@ -94,6 +67,10 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: ZINC_FIRST_ADMIN_USER
value: admin
- name: ZINC_FIRST_ADMIN_PASSWORD
value: salakala
ports:
- containerPort: 5066
resources:
@ -115,6 +92,14 @@ spec:
- name: varlog
mountPath: /var/log
readOnly: true
- name: exporter
image: sepa/beats-exporter
args:
- -p=5066
ports:
- containerPort: 8080
name: exporter
protocol: TCP
volumes:
- name: filebeat-config
configMap:
@ -168,11 +153,33 @@ spec:
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
ports:
- protocol: TCP
port: 8080
egress:
- to:
- podSelector:
matchLabels:
app: graylog
app: zinc
ports:
- protocol: TCP
port: 5044
port: 4080
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: filebeat
spec:
selector:
matchLabels:
app: filebeat
podMetricsEndpoints:
- port: exporter

122
logging/zinc.yml Normal file
View File

@ -0,0 +1,122 @@
apiVersion: v1
kind: Service
metadata:
name: zinc
spec:
clusterIP: None
selector:
app: zinc
ports:
- name: http
port: 4080
targetPort: 4080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zinc
spec:
serviceName: zinc
replicas: 1
selector:
matchLabels:
app: zinc
template:
metadata:
labels:
app: zinc
spec:
securityContext:
fsGroup: 2000
runAsUser: 10000
runAsGroup: 3000
runAsNonRoot: true
containers:
- name: zinc
image: public.ecr.aws/zinclabs/zinc:latest
env:
- name: GIN_MODE
value: release
- name: ZINC_FIRST_ADMIN_USER
value: admin
- name: ZINC_FIRST_ADMIN_PASSWORD
value: salakala
- name: ZINC_DATA_PATH
value: /data
imagePullPolicy: Always
resources:
limits:
cpu: "4"
memory: 4Gi
requests:
cpu: 32m
memory: 50Mi
ports:
- containerPort: 4080
name: http
volumeMounts:
- name: data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: longhorn
resources:
requests:
storage: 20Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: zinc
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
spec:
rules:
- host: zinc.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: zinc
port:
number: 4080
tls:
- hosts:
- zinc.k-space.ee
secretName: zinc-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: zinc
spec:
podSelector:
matchLabels:
app: zinc
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: filebeat
ports:
- protocol: TCP
port: 4080
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik

491
logmower/application.yml Normal file
View File

@ -0,0 +1,491 @@
---
apiVersion: codemowers.io/v1alpha1
kind: GeneratedSecret
metadata:
name: logmower-readwrite-password
spec:
mapping:
- key: password
value: "%(password)s"
---
apiVersion: codemowers.io/v1alpha1
kind: GeneratedSecret
metadata:
name: logmower-readonly-password
spec:
mapping:
- key: password
value: "%(password)s"
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: logmower-mongodb
spec:
additionalMongodConfig:
systemLog:
quiet: true
members: 2
arbiters: 1
type: ReplicaSet
version: "6.0.3"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: logmower-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: logmower-readwrite
- name: readonly
db: application
passwordSecretRef:
name: logmower-readonly-password
roles:
- name: read
db: application
scramCredentialsSecretName: logmower-readonly
statefulSet:
spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 1Gi
limits:
cpu: 4000m
memory: 1Gi
volumeMounts:
- name: journal-volume
mountPath: /data/journal
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- logmower-mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
labels:
usecase: logs
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
- metadata:
name: journal-volume
labels:
usecase: journal
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
labels:
usecase: data
spec:
storageClassName: mongo
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: logmower-shipper
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 50%
selector:
matchLabels:
app: logmower-shipper
template:
metadata:
labels:
app: logmower-shipper
spec:
serviceAccountName: logmower-shipper
containers:
- name: logmower-shipper
image: harbor.k-space.ee/k-space/logmower-shipper-prototype:latest
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readwrite
key: connectionString.standard
ports:
- containerPort: 8000
name: metrics
securityContext:
readOnlyRootFilesystem: true
command:
- /app/log_shipper.py
- --parse-json
- --normalize-log-level
- --stream-to-log-level
- --merge-top-level
- --max-collection-size
- "10000000000"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: etcmachineid
mountPath: /etc/machine-id
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: etcmachineid
hostPath:
path: /etc/machine-id
- name: varlog
hostPath:
path: /var/log
tolerations:
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: logging-logmower-shipper
subjects:
- kind: ServiceAccount
name: logmower-shipper
namespace: logmower
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: logmower-shipper
labels:
app: logmower-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-shipper
spec:
podSelector:
matchLabels:
app: logmower-shipper
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-eventsource
spec:
podSelector:
matchLabels:
app: logmower-eventsource
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-frontend
spec:
podSelector:
matchLabels:
app: logmower-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: logmower-shipper
spec:
selector:
matchLabels:
app: logmower-shipper
podMetricsEndpoints:
- port: metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: logmower-shipper
spec:
groups:
- name: logmower-shipper
rules:
- alert: LogmowerSingleInsertionErrors
annotations:
summary: Logmower shipper is having issues submitting log records to database
expr: rate(logmower_insertion_error_count_total[30m]) > 0
for: 0m
labels:
severity: warning
- alert: LogmowerBulkInsertionErrors
annotations:
summary: Logmower shipper is having issues submitting log records to database
expr: rate(logmower_bulk_insertion_error_count_total[30m]) > 0
for: 0m
labels:
severity: warning
- alert: LogmowerHighDatabaseLatency
annotations:
summary: Database operations are slow
expr: histogram_quantile(0.95, logmower_database_operation_latency_bucket) > 10
for: 1m
labels:
severity: warning
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: logmower
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: log.k-space.ee
http:
paths:
- pathType: Prefix
path: "/events"
backend:
service:
name: logmower-eventsource
port:
number: 3002
- pathType: Prefix
path: "/"
backend:
service:
name: logmower-frontend
port:
number: 8080
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: v1
kind: Service
metadata:
name: logmower-eventsource
spec:
type: ClusterIP
selector:
app: logmower-eventsource
ports:
- protocol: TCP
port: 3002
---
apiVersion: v1
kind: Service
metadata:
name: logmower-frontend
spec:
type: ClusterIP
selector:
app: logmower-frontend
ports:
- protocol: TCP
port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-frontend
spec:
selector:
matchLabels:
app: logmower-frontend
template:
metadata:
labels:
app: logmower-frontend
spec:
containers:
- name: logmower-frontend
image: harbor.k-space.ee/k-space/logmower-frontend
ports:
- containerPort: 8080
name: http
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
memory: 50Mi
requests:
cpu: 1m
memory: 20Mi
volumeMounts:
- name: nginx-cache
mountPath: /var/cache/nginx/
- name: nginx-config
mountPath: /var/config/nginx/
- name: var-run
mountPath: /var/run/
volumes:
- emptyDir: {}
name: nginx-cache
- emptyDir: {}
name: nginx-config
- emptyDir: {}
name: var-run
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-eventsource
spec:
selector:
matchLabels:
app: logmower-eventsource
template:
metadata:
labels:
app: logmower-eventsource
spec:
containers:
- name: logmower-eventsource
image: harbor.k-space.ee/k-space/logmower-eventsource
ports:
- containerPort: 3002
name: nodejs
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
resources:
limits:
cpu: 500m
memory: 200Mi
requests:
cpu: 10m
memory: 100Mi
env:
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readonly
key: connectionString.standard
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-mongodb
spec:
podSelector:
matchLabels:
app: logmower-mongodb-svc
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector: {}
ports:
- port: 27017
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017

47
logmower/mongoexpress.yml Normal file
View File

@ -0,0 +1,47 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: logmower-mongoexpress
spec:
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: logmower-mongoexpress
template:
metadata:
labels:
app: logmower-mongoexpress
spec:
containers:
- name: mongoexpress
image: mongo-express
ports:
- name: mongoexpress
containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_URL
valueFrom:
secretKeyRef:
name: logmower-mongodb-application-readonly
key: connectionString.standard
- name: ME_CONFIG_MONGODB_ENABLE_ADMIN
value: "true"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: logmower-mongoexpress
spec:
podSelector:
matchLabels:
app: logmower-mongoexpress
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: logmower-mongodb-svc
ports:
- port: 27017

View File

@ -0,0 +1 @@
../shared/networkpolicy-base.yml

View File

@ -1,8 +1,8 @@
# Longhorn distributed block storage system
The manifest was fetched from
https://raw.githubusercontent.com/longhorn/longhorn/v1.2.4/deploy/longhorn.yaml
and then heavily modified.
https://raw.githubusercontent.com/longhorn/longhorn/v1.4.0/deploy/longhorn.yaml
and then heavily modified as per `changes.diff`.
To deploy Longhorn, use the following:
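The deploy commands themselves sit below this hunk in the README; going by the sibling components in this repo, they are presumably along the lines of:
```
kubectl create namespace longhorn-system
kubectl apply -n longhorn-system -f application.yml
```
The namespace and file name above are assumptions, not taken from this diff.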

View File

@ -5,7 +5,6 @@ metadata:
namespace: longhorn-system
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
@ -24,9 +23,7 @@ spec:
number: 80
tls:
- hosts:
- longhorn.k-space.ee
secretName: longhorn-tls
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor

File diff suppressed because it is too large

View File

@ -0,0 +1,92 @@
--- ref 2023-02-20 11:15:07.340650467 +0200
+++ application.yml 2023-02-19 18:38:05.059234209 +0200
@@ -60,14 +60,14 @@
storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
- reclaimPolicy: "Delete"
+ reclaimPolicy: "Retain"
volumeBindingMode: Immediate
parameters:
- numberOfReplicas: "3"
+ numberOfReplicas: "2"
staleReplicaTimeout: "30"
fromBackup: ""
- fsType: "ext4"
- dataLocality: "disabled"
+ fsType: "xfs"
+ dataLocality: "best-effort"
---
# Source: longhorn/templates/crds.yaml
apiVersion: apiextensions.k8s.io/v1
@@ -3869,6 +3869,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-manager
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
initContainers:
- name: wait-longhorn-admission-webhook
image: longhornio/longhorn-manager:v1.4.0
@@ -3968,6 +3973,10 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-driver-deployer
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
initContainers:
- name: wait-longhorn-manager
image: longhornio/longhorn-manager:v1.4.0
@@ -4037,6 +4046,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-recovery-backend
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
@@ -4103,6 +4117,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-ui
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
@@ -4166,6 +4185,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-conversion-webhook
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
@@ -4226,6 +4250,11 @@
app.kubernetes.io/version: v1.4.0
app: longhorn-admission-webhook
spec:
+ tolerations:
+ - key: dedicated
+ operator: Equal
+ value: storage
+ effect: NoSchedule
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:

158
member-site/doorboy.yml Normal file
View File

@ -0,0 +1,158 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: doorboy-proxy
annotations:
keel.sh/policy: force
keel.sh/trigger: poll
spec:
revisionHistoryLimit: 0
replicas: 3
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: doorboy-proxy
template:
metadata:
labels: *selectorLabels
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- doorboy-proxy
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- name: doorboy-proxy
image: harbor.k-space.ee/k-space/doorboy-proxy:latest
envFrom:
- secretRef:
name: doorboy-api
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: mongo-application-readwrite
key: connectionString.standard
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
ports:
- containerPort: 5000
name: "http"
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "500Mi"
cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
name: doorboy-proxy
spec:
selector:
app.kubernetes.io/name: doorboy-proxy
ports:
- protocol: TCP
name: http
port: 5000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: doorboy-proxy
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: doorboy-proxy.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: doorboy-proxy
port:
name: http
tls:
- hosts:
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: doorboy-proxy
spec:
selector:
matchLabels:
app.kubernetes.io/name: doorboy-proxy
podMetricsEndpoints:
- port: http
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kdoorpi
spec:
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: kdoorpi
template:
metadata:
labels: *selectorLabels
spec:
containers:
- name: kdoorpi
image: harbor.k-space.ee/k-space/kdoorpi:latest
env:
- name: KDOORPI_API_ALLOWED
value: https://doorboy-proxy.k-space.ee/allowed
- name: KDOORPI_API_LONGPOLL
value: https://doorboy-proxy.k-space.ee/longpoll
- name: KDOORPI_API_SWIPE
value: http://172.21.99.98/swipe
- name: KDOORPI_DOOR
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: KDOORPI_API_KEY
valueFrom:
secretKeyRef:
name: doorboy-api
key: DOORBOY_SECRET
- name: KDOORPI_UID_SALT
valueFrom:
secretKeyRef:
name: doorboy-uid-hash-salt
key: KDOORPI_UID_SALT
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
nodeSelector:
dedicated: door
tolerations:
- key: dedicated
operator: Equal
value: door
effect: NoSchedule
- key: arch
operator: Equal
value: arm64
effect: NoSchedule

View File

@ -1,11 +0,0 @@
# meta-operator
Meta operator enables creating operators without building any binaries or
Docker images.
For an example operator declaration, see `keydb.yml`.
```
kubectl create namespace meta-operator
kubectl apply -f application.yml -f keydb.yml
```

View File

@ -1,220 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clusteroperators.codemowers.io
spec:
group: codemowers.io
names:
plural: clusteroperators
singular: clusteroperator
kind: ClusterOperator
shortNames:
- clusteroperator
scope: Cluster
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
resource:
type: object
properties:
group:
type: string
version:
type: string
plural:
type: string
secret:
type: object
properties:
name:
type: string
enabled:
type: boolean
structure:
type: array
items:
type: object
properties:
key:
type: string
value:
type: string
services:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
deployments:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
statefulsets:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
configmaps:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
customresources:
type: array
items:
type: object
x-kubernetes-preserve-unknown-fields: true
required: ["spec"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: meta-operator
namespace: meta-operator
labels:
app.kubernetes.io/name: meta-operator
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: meta-operator
template:
metadata:
labels:
app.kubernetes.io/name: meta-operator
spec:
serviceAccountName: meta-operator
containers:
- name: meta-operator
image: harbor.k-space.ee/k-space/meta-operator
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
name: meta
spec:
resource:
group: codemowers.io
version: v1alpha1
plural: clusteroperators
secret:
enabled: false
deployments:
- apiVersion: apps/v1
kind: Deployment
metadata:
name: foobar-operator
labels:
app.kubernetes.io/name: foobar-operator
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: foobar-operator
template:
metadata:
labels:
app.kubernetes.io/name: foobar-operator
spec:
serviceAccountName: meta-operator
containers:
- name: meta-operator
image: harbor.k-space.ee/k-space/meta-operator
command:
- /meta-operator.py
- --target
- foobar
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: meta-operator
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
- services
verbs:
- create
- get
- patch
- update
- delete
- list
- apiGroups:
- apps
resources:
- deployments
- statefulsets
verbs:
- create
- delete
- list
- update
- patch
- apiGroups:
- codemowers.io
resources:
- bindzones
- clusteroperators
- keydbs
verbs:
- get
- list
- watch
- apiGroups:
- k-space.ee
resources:
- cams
verbs:
- get
- list
- watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: meta-operator
namespace: meta-operator
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: meta-operator
subjects:
- kind: ServiceAccount
name: meta-operator
namespace: meta-operator
roleRef:
kind: ClusterRole
name: meta-operator
apiGroup: rbac.authorization.k8s.io

View File

@ -1,253 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: keydbs.codemowers.io
spec:
group: codemowers.io
names:
plural: keydbs
singular: keydb
kind: KeyDBCluster
shortNames:
- keydb
scope: Namespaced
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
replicas:
type: integer
description: Replica count
required: ["spec"]
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
name: keydb
spec:
resource:
group: codemowers.io
version: v1alpha1
plural: keydbs
secret:
enabled: true
name: foobar-secrets
structure:
- key: REDIS_PASSWORD
value: "%s"
- key: REDIS_URI
value: "redis://:%s@foobar"
configmaps:
- apiVersion: v1
kind: ConfigMap
metadata:
name: foobar-scripts
labels:
app.kubernetes.io/name: foobar
data:
entrypoint.sh: |
#!/bin/bash
set -euxo pipefail
host="$(hostname)"
port="6379"
replicas=()
for node in {0..2}; do
if [ "${host}" != "redis-${node}" ]; then
replicas+=("--replicaof redis-${node}.redis-headless ${port}")
fi
done
exec keydb-server /etc/keydb/redis.conf \
--active-replica "yes" \
--multi-master "yes" \
--appendonly "no" \
--bind "0.0.0.0" \
--port "${port}" \
--protected-mode "no" \
--server-threads "2" \
--masterauth "${REDIS_PASSWORD}" \
--requirepass "${REDIS_PASSWORD}" \
"${replicas[@]}"
ping_readiness_local.sh: |-
#!/bin/bash
set -e
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 3 "${1}" \
keydb-cli \
-h localhost \
-p 6379 \
ping
)"
if [ "${response}" != "PONG" ]; then
echo "${response}"
exit 1
fi
ping_liveness_local.sh: |-
#!/bin/bash
set -e
[[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
response="$(
timeout -s 3 "${1}" \
keydb-cli \
-h localhost \
-p 6379 \
ping
)"
if [ "${response}" != "PONG" ] && [[ ! "${response}" =~ ^.*LOADING.*$ ]]; then
echo "${response}"
exit 1
fi
cleanup_tempfiles.sh: |-
#!/bin/bash
set -e
find /data/ -type f \( -name "temp-*.aof" -o -name "temp-*.rdb" \) -mmin +60 -delete
services:
- apiVersion: v1
kind: Service
metadata:
name: foobar-headless
labels:
app.kubernetes.io/name: foobar
spec:
type: ClusterIP
clusterIP: None
ports:
- name: redis
port: 6379
protocol: TCP
targetPort: redis
selector:
app.kubernetes.io/name: foobar
- apiVersion: v1
kind: Service
metadata:
name: foobar
labels:
app.kubernetes.io/name: foobar
annotations:
{}
spec:
type: ClusterIP
ports:
- name: redis
port: 6379
protocol: TCP
targetPort: redis
- name: exporter
port: 9121
protocol: TCP
targetPort: exporter
selector:
app.kubernetes.io/name: foobar
sessionAffinity: ClientIP
statefulsets:
- apiVersion: apps/v1
kind: StatefulSet
metadata:
name: foobar
labels:
app.kubernetes.io/name: foobar
spec:
replicas: 3
serviceName: foobar-headless
selector:
matchLabels:
app.kubernetes.io/name: foobar
template:
metadata:
labels:
app.kubernetes.io/name: foobar
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- 'foobar'
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- name: redis
image: eqalpha/keydb:x86_64_v6.3.1
imagePullPolicy: Always
command:
- /scripts/entrypoint.sh
ports:
- name: redis
containerPort: 6379
protocol: TCP
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 6
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /scripts/ping_liveness_local.sh 5
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 2
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /scripts/ping_readiness_local.sh 1
startupProbe:
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 2
failureThreshold: 24
exec:
command:
- sh
- -c
- /scripts/ping_readiness_local.sh 1
resources:
{}
securityContext:
{}
volumeMounts:
- name: foobar-scripts
mountPath: /scripts
- name: foobar-data
mountPath: /data
envFrom:
- secretRef:
name: foobar-secrets
- name: exporter
image: quay.io/oliver006/redis_exporter
ports:
- name: exporter
containerPort: 9121
envFrom:
- secretRef:
name: foobar-secrets
securityContext:
{}
volumes:
- name: foobar-scripts
configMap:
name: foobar-scripts
defaultMode: 0755
- name: foobar-data
emptyDir: {}

View File

@ -12,5 +12,7 @@ spec:
to:
- namespaceSelector: {}
ports:
- protocol: TCP
port: 33060
- protocol: TCP
port: 3306

View File

@ -559,10 +559,10 @@ metadata:
name: mysql-operator
namespace: mysql-operator
labels:
version: "8.0.30-2.0.5"
version: "8.0.30-2.0.6"
app.kubernetes.io/name: mysql-operator
app.kubernetes.io/instance: mysql-operator
app.kubernetes.io/version: "8.0.30-2.0.5"
app.kubernetes.io/version: "8.0.30-2.0.6"
app.kubernetes.io/component: controller
app.kubernetes.io/managed-by: helm
app.kubernetes.io/created-by: helm
@ -578,7 +578,7 @@ spec:
spec:
containers:
- name: mysql-operator
image: mysql/mysql-operator:8.0.30-2.0.5
image: mysql/mysql-operator:8.0.30-2.0.6
imagePullPolicy: IfNotPresent
args: ["mysqlsh", "--log-level=@INFO", "--pym", "mysqloperator", "operator"]
env:

9
nyancat/README.md Normal file
View File

@ -0,0 +1,9 @@
# Nyancat server deployment
Something silly for a change.
To connect, use:
```
telnet nyancat.k-space.ee
```

49
nyancat/application.yaml Normal file
View File

@ -0,0 +1,49 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nyancat
namespace: nyancat
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: nyancat
template:
metadata:
labels:
app.kubernetes.io/name: nyancat
spec:
containers:
- name: nyancat
image: harbor.k-space.ee/k-space/nyancat-server:latest
command:
- onenetd
- -v1
- "0"
- "2323"
- nyancat
- -I
- --telnet
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65534
---
apiVersion: v1
kind: Service
metadata:
name: nyancat
namespace: nyancat
annotations:
metallb.universe.tf/address-pool: eenet
external-dns.alpha.kubernetes.io/hostname: nyancat.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
app.kubernetes.io/name: nyancat
ports:
- protocol: TCP
port: 23
targetPort: 2323

11
openebs/README.md Normal file
View File

@ -0,0 +1,11 @@
# Raw file based local PVs
We currently use only the `rawfile-localpv` portion of OpenEBS.
The manifests were rendered with `helm template` from https://github.com/openebs/rawfile-localpv
and subsequently modified.
```
kubectl create namespace openebs
kubectl apply -n openebs -f rawfile.yaml
```
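The rendering step itself is not captured in the repo; a sketch of what it presumably looked like (the chart path and release name are assumptions about the upstream repository layout, not something this README states):
```
git clone https://github.com/openebs/rawfile-localpv
helm template rawfile-csi rawfile-localpv/deploy/charts/rawfile-csi \
  --namespace openebs > rawfile.yaml
```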

404
openebs/rawfile.yaml Normal file
View File

@ -0,0 +1,404 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: rawfile-csi-driver
namespace: openebs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-provisioner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["get", "list"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csistoragecapacities"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get"]
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-broker
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-resizer
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["patch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-provisioner
subjects:
- kind: ServiceAccount
name: rawfile-csi-driver
namespace: openebs
roleRef:
kind: ClusterRole
name: rawfile-csi-provisioner
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-broker
subjects:
- kind: ServiceAccount
name: rawfile-csi-driver
namespace: openebs
roleRef:
kind: ClusterRole
name: rawfile-csi-broker
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rawfile-csi-resizer
subjects:
- kind: ServiceAccount
name: rawfile-csi-driver
namespace: openebs
roleRef:
kind: ClusterRole
name: rawfile-csi-resizer
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: rawfile-csi-controller
namespace: openebs
labels:
app.kubernetes.io/name: rawfile-csi
component: controller
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: rawfile-csi
component: controller
clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
name: rawfile-csi-node
namespace: openebs
labels:
app.kubernetes.io/name: rawfile-csi
component: node
spec:
type: ClusterIP
ports:
- name: metrics
port: 9100
targetPort: metrics
protocol: TCP
selector:
app.kubernetes.io/name: rawfile-csi
component: node
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: rawfile-csi-node
namespace: openebs
spec:
updateStrategy:
rollingUpdate:
maxUnavailable: "100%"
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: rawfile-csi
component: node
template:
metadata:
labels: *selectorLabels
spec:
serviceAccount: rawfile-csi-driver
priorityClassName: system-node-critical
tolerations:
- operator: "Exists"
volumes:
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry
type: Directory
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/rawfile-csi
type: DirectoryOrCreate
- name: mountpoint-dir
hostPath:
path: /var/lib/kubelet
type: DirectoryOrCreate
- name: data-dir
hostPath:
path: /var/csi/rawfile
type: DirectoryOrCreate
containers:
- name: csi-driver
image: "harbor.k-space.ee/k-space/rawfile-localpv:latest"
imagePullPolicy: Always
securityContext:
privileged: true
env:
- name: PROVISIONER_NAME
value: "rawfile.csi.openebs.io"
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: IMAGE_REPOSITORY
value: "harbor.k-space.ee/k-space/rawfile-localpv"
- name: IMAGE_TAG
value: "latest"
- name: NODE_ID
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
ports:
- name: metrics
containerPort: 9100
- name: csi-probe
containerPort: 9808
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: mountpoint-dir
mountPath: /var/lib/kubelet
mountPropagation: "Bidirectional"
- name: data-dir
mountPath: /data
resources:
limits:
cpu: 1
memory: 100Mi
requests:
cpu: 10m
memory: 100Mi
- name: node-driver-registrar
image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
imagePullPolicy: IfNotPresent
args:
- --csi-address=$(ADDRESS)
- --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
- --health-port=9809
env:
- name: ADDRESS
value: /csi/csi.sock
- name: DRIVER_REG_SOCK_PATH
value: /var/lib/kubelet/plugins/rawfile-csi/csi.sock
ports:
- containerPort: 9809
name: healthz
livenessProbe:
httpGet:
path: /healthz
port: healthz
initialDelaySeconds: 5
timeoutSeconds: 5
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
resources:
limits:
cpu: 500m
memory: 100Mi
requests:
cpu: 10m
memory: 100Mi
- name: external-provisioner
image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
imagePullPolicy: IfNotPresent
args:
- "--csi-address=$(ADDRESS)"
- "--feature-gates=Topology=true"
- "--strict-topology"
- "--immediate-topology=false"
- "--timeout=120s"
- "--enable-capacity=true"
- "--capacity-ownerref-level=1" # DaemonSet
- "--node-deployment=true"
env:
- name: ADDRESS
value: /csi/csi.sock
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
volumeMounts:
- name: socket-dir
mountPath: /csi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rawfile-csi-controller
namespace: openebs
spec:
replicas: 1
serviceName: rawfile-csi
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: rawfile-csi
component: controller
template:
metadata:
labels: *selectorLabels
spec:
serviceAccount: rawfile-csi-driver
priorityClassName: system-cluster-critical
tolerations:
- key: "node-role.kubernetes.io/master"
operator: Equal
value: "true"
effect: NoSchedule
volumes:
- name: socket-dir
emptyDir: {}
containers:
- name: csi-driver
image: "harbor.k-space.ee/k-space/rawfile-localpv"
imagePullPolicy: Always
args:
- csi-driver
- --disable-metrics
env:
- name: PROVISIONER_NAME
value: "rawfile.csi.openebs.io"
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: IMAGE_REPOSITORY
value: "harbor.k-space.ee/k-space/rawfile-localpv"
- name: IMAGE_TAG
value: "latest"
volumeMounts:
- name: socket-dir
mountPath: /csi
ports:
- name: csi-probe
containerPort: 9808
resources:
limits:
cpu: 1
memory: 100Mi
requests:
cpu: 10m
memory: 100Mi
- name: external-resizer
image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
imagePullPolicy: IfNotPresent
args:
- "--csi-address=$(ADDRESS)"
- "--handle-volume-inuse-error=false"
env:
- name: ADDRESS
value: /csi/csi.sock
volumeMounts:
- name: socket-dir
mountPath: /csi
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: rawfile.csi.openebs.io
spec:
attachRequired: false
podInfoOnMount: true
fsGroupPolicy: File
storageCapacity: true
volumeLifecycleModes:
- Persistent
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rawfile-ext4
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "ext4"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rawfile-xfs
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"


@ -26,7 +26,9 @@ spec:
- name: PMA_ARBITRARY
value: "1"
- name: PMA_HOSTS
value: mysql-cluster.etherpad.svc.cluster.local,mariadb.authelia,mariadb.nextcloud,172.20.36.1
value: mysql-cluster.authelia,mysql-cluster.etherpad,mariadb.authelia,mariadb.nextcloud,172.20.36.1
- name: PMA_PORTS
value: 6446,6446,3306,3306,3306
- name: PMA_ABSOLUTE_URI
value: https://phpmyadmin.k-space.ee/
- name: UPLOAD_LIMIT
@ -38,7 +40,6 @@ metadata:
name: phpmyadmin
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@ -57,8 +58,7 @@ spec:
number: 80
tls:
- hosts:
- phpmyadmin.k-space.ee
secretName: phpmyadmin-tls
- "*.k-space.ee"
---
apiVersion: v1
kind: Service
@ -98,7 +98,7 @@ spec:
to:
- namespaceSelector: {}
ports:
- port: 3306
- port: 6446
- # Allow connecting to any MySQL instance outside the cluster
to:
- ipBlock:

10
playground/README.md Normal file

@ -0,0 +1,10 @@
# Playground
The playground namespace is accessible to the `Developers` AD group.
A novel log aggregator is being developed in this namespace:
```
kubectl create secret generic -n playground mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n playground mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl apply -n playground -f logging.yml -f mongodb-support.yml -f mongoexpress.yml -f networkpolicy-base.yml
```
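The MongoDB community operator publishes the resulting connection strings as secrets; a sketch for extracting the read-write URI once `logging.yml` has been reconciled:
```
kubectl get secret -n playground mongodb-application-readwrite \
  -o jsonpath='{.data.connectionString\.standard}' | base64 -d
```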

263
playground/logging.yml Normal file

@ -0,0 +1,263 @@
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongodb
spec:
additionalMongodConfig:
systemLog:
quiet: true
members: 3
type: ReplicaSet
version: "5.0.13"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: mongodb-application-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: mongodb-application-readwrite
- name: readonly
db: application
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 2Gi
limits:
cpu: 2000m
memory: 2Gi
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: log-shipper
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 50%
selector:
matchLabels:
app: log-shipper
template:
metadata:
labels:
app: log-shipper
spec:
serviceAccountName: log-shipper
containers:
- name: log-shipper
image: harbor.k-space.ee/k-space/log-shipper
securityContext:
runAsUser: 0
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
ports:
- containerPort: 8000
name: metrics
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: etcmachineid
mountPath: /etc/machine-id
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: etcmachineid
hostPath:
path: /etc/machine-id
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
tolerations:
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: logging-log-shipper
subjects:
- kind: ServiceAccount
name: log-shipper
namespace: playground
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: log-shipper
labels:
app: log-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-shipper
spec:
podSelector:
matchLabels:
app: log-shipper
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-backend
spec:
podSelector:
matchLabels:
app: log-viewer-backend
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-frontend
spec:
podSelector:
matchLabels:
app: log-viewer-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: log-shipper
spec:
selector:
matchLabels:
app: log-shipper
podMetricsEndpoints:
- port: metrics

1
playground/mongodb-support.yml Symbolic link

@ -0,0 +1 @@
../mongodb-operator/mongodb-support.yml

1
playground/mongoexpress.yml Symbolic link

@ -0,0 +1 @@
../shared/mongoexpress.yml

1
playground/networkpolicy-base.yml Symbolic link

@ -0,0 +1 @@
../shared/networkpolicy-base.yml

1
prometheus-operator/.gitignore vendored Normal file

@ -0,0 +1 @@
bundle.yml


@ -1,7 +1,7 @@
# Prometheus operator
```
curl -L https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.59.0/bundle.yaml | sed -e 's/namespace: default/namespace: prometheus-operator/g' > bundle.yml
curl -L https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.61.1/bundle.yaml | sed -e 's/namespace: default/namespace: prometheus-operator/g' > bundle.yml
kubectl create namespace prometheus-operator
kubectl apply --server-side -n prometheus-operator -f bundle.yml
kubectl delete -n prometheus-operator configmap snmp-exporter
@ -9,7 +9,16 @@ kubectl create -n prometheus-operator configmap snmp-exporter --from-file=snmp.y
kubectl apply -n prometheus-operator -f application.yml -f node-exporter.yml -f blackbox-exporter.yml -f snmp-exporter.yml -f mikrotik-exporter.yml
```
# Mikrotik expoeter
# Slack
```
kubectl create -n prometheus-operator secret generic slack-secrets \
--from-literal=webhook-url=https://hooks.slack.com/services/...
```
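To sanity-check the webhook before Alertmanager starts posting through it, a one-off test message can be sent to the same (placeholder) URL:
```
curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Alertmanager webhook test"}' \
  https://hooks.slack.com/services/...
```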
# Mikrotik exporter
```
kubectl create -n prometheus-operator secret generic mikrotik-exporter \


@ -1,4 +1,29 @@
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager
labels:
app.kubernetes.io/name: alertmanager
spec:
route:
routes:
- continue: false
receiver: slack-notifications
matchers:
- matchType: "="
name: severity
value: critical
receiver: 'null'
receivers:
- name: 'slack-notifications'
slackConfigs:
- channel: '#kube-prod'
sendResolved: true
apiURL:
name: slack-secrets
key: webhook-url
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
@ -15,6 +40,14 @@ kind: Alertmanager
metadata:
name: alertmanager
spec:
alertmanagerConfigMatcherStrategy:
type: None
alertmanagerConfigNamespaceSelector: {}
alertmanagerConfigSelector: {}
alertmanagerConfiguration:
name: alertmanager
secrets:
- slack-secrets
nodeSelector:
dedicated: monitoring
tolerations:
@ -52,10 +85,8 @@ spec:
alerting:
alertmanagers:
- namespace: prometheus-operator
name: alertmanager
port: http
pathPrefix: "/"
apiVersion: v2
name: alertmanager-operated
port: web
externalUrl: "http://prom.k-space.ee/"
replicas: 2
shards: 1
@ -73,7 +104,7 @@ spec:
probeSelector: {}
ruleNamespaceSelector: {}
ruleSelector: {}
retentionSize: 80GB
retentionSize: 8GB
storage:
volumeClaimTemplate:
spec:
@ -81,7 +112,7 @@ spec:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
storage: 10Gi
storageClassName: local-path
---
apiVersion: v1
@ -378,7 +409,6 @@ kind: Ingress
metadata:
name: prometheus
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@ -397,15 +427,13 @@ spec:
number: 9090
tls:
- hosts:
- prom.k-space.ee
secretName: prom-tls
- "*.k-space.ee"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: alertmanager
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@ -424,8 +452,7 @@ spec:
number: 9093
tls:
- hosts:
- am.k-space.ee
secretName: alertmanager-tls
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
@ -487,276 +514,3 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: kubelet
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-state-metrics
spec:
groups:
- name: kube-state-metrics
rules:
- alert: KubernetesNodeReady
expr: kube_node_status_condition{condition="Ready",status="true"} == 0
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes Node ready (instance {{ $labels.instance }})
description: "Node {{ $labels.node }} has been unready for a long time\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesMemoryPressure
expr: kube_node_status_condition{condition="MemoryPressure",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes memory pressure (instance {{ $labels.instance }})
description: "{{ $labels.node }} has MemoryPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDiskPressure
expr: kube_node_status_condition{condition="DiskPressure",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes disk pressure (instance {{ $labels.instance }})
description: "{{ $labels.node }} has DiskPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesOutOfDisk
expr: kube_node_status_condition{condition="OutOfDisk",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes out of disk (instance {{ $labels.instance }})
description: "{{ $labels.node }} has OutOfDisk condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesOutOfCapacity
expr: sum by (node) ((kube_pod_status_phase{phase="Running"} == 1) + on(uid) group_left(node) (0 * kube_pod_info{pod_template_hash=""})) / sum by (node) (kube_node_status_allocatable{resource="pods"}) * 100 > 90
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes out of capacity (instance {{ $labels.instance }})
description: "{{ $labels.node }} is out of capacity\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesContainerOomKiller
expr: (kube_pod_container_status_restarts_total - kube_pod_container_status_restarts_total offset 10m >= 1) and ignoring (reason) min_over_time(kube_pod_container_status_last_terminated_reason{reason="OOMKilled"}[10m]) == 1
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes container oom killer (instance {{ $labels.instance }})
description: "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} has been OOMKilled {{ $value }} times in the last 10 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesJobFailed
expr: kube_job_status_failed > 0
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes Job failed (instance {{ $labels.instance }})
description: "Job {{$labels.namespace}}/{{$labels.exported_job}} failed to complete\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesCronjobSuspended
expr: kube_cronjob_spec_suspend != 0
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes CronJob suspended (instance {{ $labels.instance }})
description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is suspended\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPersistentvolumeclaimPending
expr: kube_persistentvolumeclaim_status_phase{phase="Pending"} == 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes PersistentVolumeClaim pending (instance {{ $labels.instance }})
description: "PersistentVolumeClaim {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is pending\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesVolumeOutOfDiskSpace
expr: kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes * 100 < 10
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes Volume out of disk space (instance {{ $labels.instance }})
description: "Volume is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesVolumeFullInFourDays
expr: predict_linear(kubelet_volume_stats_available_bytes[6h], 4 * 24 * 3600) < 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Volume full in four days (instance {{ $labels.instance }})
description: "{{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is expected to fill up within four days. Currently {{ $value | humanize }}% is available.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPersistentvolumeError
expr: kube_persistentvolume_status_phase{phase=~"Failed|Pending", job="kube-state-metrics"} > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes PersistentVolume error (instance {{ $labels.instance }})
description: "Persistent volume is in bad state\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetDown
expr: (kube_statefulset_status_replicas_ready / kube_statefulset_status_replicas_current) != 1
for: 1m
labels:
severity: critical
annotations:
summary: Kubernetes StatefulSet down (instance {{ $labels.instance }})
description: "A StatefulSet went down\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaScalingAbility
expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="AbleToScale"} == 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes HPA scaling ability (instance {{ $labels.instance }})
description: "Pod is unable to scale\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaMetricAvailability
expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="ScalingActive"} == 1
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes HPA metric availability (instance {{ $labels.instance }})
description: "HPA is not able to collect metrics\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaScaleCapability
expr: kube_horizontalpodautoscaler_status_desired_replicas >= kube_horizontalpodautoscaler_spec_max_replicas
for: 2m
labels:
severity: info
annotations:
summary: Kubernetes HPA scale capability (instance {{ $labels.instance }})
description: "The maximum number of desired Pods has been hit\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPodNotHealthy
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
description: "Pod has been in a non-ready state for longer than 15 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPodCrashLooping
expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesReplicassetMismatch
expr: kube_replicaset_spec_replicas != kube_replicaset_status_ready_replicas
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes ReplicasSet mismatch (instance {{ $labels.instance }})
description: "Deployment Replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDeploymentReplicasMismatch
expr: kube_deployment_spec_replicas != kube_deployment_status_replicas_available
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes Deployment replicas mismatch (instance {{ $labels.instance }})
description: "Deployment Replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetReplicasMismatch
expr: kube_statefulset_status_replicas_ready != kube_statefulset_status_replicas
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes StatefulSet replicas mismatch (instance {{ $labels.instance }})
description: "A StatefulSet does not match the expected number of replicas.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDeploymentGenerationMismatch
expr: kube_deployment_status_observed_generation != kube_deployment_metadata_generation
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes Deployment generation mismatch (instance {{ $labels.instance }})
description: "A Deployment has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetGenerationMismatch
expr: kube_statefulset_status_observed_generation != kube_statefulset_metadata_generation
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes StatefulSet generation mismatch (instance {{ $labels.instance }})
description: "A StatefulSet has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetUpdateNotRolledOut
expr: max without (revision) (kube_statefulset_status_current_revision unless kube_statefulset_status_update_revision) * (kube_statefulset_replicas != kube_statefulset_status_replicas_updated)
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes StatefulSet update not rolled out (instance {{ $labels.instance }})
description: "StatefulSet update has not been rolled out.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDaemonsetRolloutStuck
expr: kube_daemonset_status_number_ready / kube_daemonset_status_desired_number_scheduled * 100 < 100 or kube_daemonset_status_desired_number_scheduled - kube_daemonset_status_current_number_scheduled > 0
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes DaemonSet rollout stuck (instance {{ $labels.instance }})
description: "Some Pods of DaemonSet are not scheduled or not ready\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDaemonsetMisscheduled
expr: kube_daemonset_status_number_misscheduled > 0
for: 1m
labels:
severity: critical
annotations:
summary: Kubernetes DaemonSet misscheduled (instance {{ $labels.instance }})
description: "Some DaemonSet Pods are running where they are not supposed to run\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesCronjobTooLong
expr: time() - kube_cronjob_next_schedule_time > 3600
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes CronJob too long (instance {{ $labels.instance }})
description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is taking more than 1h to complete.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesJobSlowCompletion
expr: kube_job_spec_completions - kube_job_status_succeeded > 0
for: 12h
labels:
severity: critical
annotations:
summary: Kubernetes job slow completion (instance {{ $labels.instance }})
description: "Kubernetes Job {{ $labels.namespace }}/{{ $labels.job_name }} did not complete in time.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiServerErrors
expr: sum(rate(apiserver_request_total{job="apiserver",code=~"^(?:5..)$"}[1m])) / sum(rate(apiserver_request_total{job="apiserver"}[1m])) * 100 > 3
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes API server errors (instance {{ $labels.instance }})
description: "Kubernetes API server is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiClientErrors
expr: (sum(rate(rest_client_requests_total{code=~"(4|5).."}[1m])) by (instance, job) / sum(rate(rest_client_requests_total[1m])) by (instance, job)) * 100 > 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes API client errors (instance {{ $labels.instance }})
description: "Kubernetes API client is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesClientCertificateExpiresNextWeek
expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 7*24*60*60
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes client certificate expires next week (instance {{ $labels.instance }})
description: "A client certificate used to authenticate to the apiserver is expiring next week.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesClientCertificateExpiresSoon
expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 24*60*60
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes client certificate expires soon (instance {{ $labels.instance }})
description: "A client certificate used to authenticate to the apiserver is expiring in less than 24.0 hours.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiServerLatency
expr: histogram_quantile(0.99, sum(rate(apiserver_request_latencies_bucket{subresource!="log",verb!~"^(?:CONNECT|WATCHLIST|WATCH|PROXY)$"} [10m])) WITHOUT (instance, resource)) / 1e+06 > 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes API server latency (instance {{ $labels.instance }})
description: "Kubernetes API server has a 99th percentile latency of {{ $value }} seconds for {{ $labels.verb }} {{ $labels.resource }}.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"


@ -156,7 +156,7 @@ metadata:
name: blackbox-exporter
spec:
revisionHistoryLimit: 0
replicas: 2
replicas: 3
selector:
matchLabels:
app: blackbox-exporter

File diff suppressed because it is too large


@ -87,7 +87,13 @@ spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mikrotik-exporter
topologyKey: "kubernetes.io/hostname"
---
kind: Service
apiVersion: v1


@ -4,11 +4,13 @@ kind: Probe
metadata:
name: nodes-proxmox
spec:
scrapeTimeout: 30s
targets:
staticConfig:
static:
- nas.mgmt.k-space.ee:9100
- pve1.proxmox.infra.k-space.ee:9100
- pve2.proxmox.infra.k-space.ee:9100
- pve8.proxmox.infra.k-space.ee:9100
- pve9.proxmox.infra.k-space.ee:9100
relabelingConfigs:
@ -86,37 +88,37 @@ spec:
summary: Host memory under memory pressure (instance {{ $labels.instance }})
description: The node is under heavy memory pressure. High rate of major page faults
- alert: HostUnusualNetworkThroughputIn
expr: sum by (instance) (rate(node_network_receive_bytes_total[2m])) > 160e+06
expr: sum by (instance) (rate(node_network_receive_bytes_total[2m])) > 800e+06
for: 1h
labels:
severity: warning
annotations:
summary: Host unusual network throughput in (instance {{ $labels.instance }})
description: Host network interfaces are probably receiving too much data (> 160 MB/s)
description: Host network interfaces are probably receiving too much data (> 800 MB/s)
- alert: HostUnusualNetworkThroughputOut
expr: sum by (instance) (rate(node_network_transmit_bytes_total[2m])) > 160e+06
expr: sum by (instance) (rate(node_network_transmit_bytes_total[2m])) > 800e+06
for: 1h
labels:
severity: warning
annotations:
summary: Host unusual network throughput out (instance {{ $labels.instance }})
description: Host network interfaces are probably sending too much data (> 160 MB/s)
description: Host network interfaces are probably sending too much data (> 800 MB/s)
- alert: HostUnusualDiskReadRate
expr: sum by (instance) (rate(node_disk_read_bytes_total[2m])) > 50000000
expr: sum by (instance) (rate(node_disk_read_bytes_total[2m])) > 500e+06
for: 1h
labels:
severity: warning
annotations:
summary: Host unusual disk read rate (instance {{ $labels.instance }})
description: Disk is probably reading too much data (> 50 MB/s)
description: Disk is probably reading too much data (> 500 MB/s)
- alert: HostUnusualDiskWriteRate
expr: sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 50000000
expr: sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 500e+06
for: 1h
labels:
severity: warning
annotations:
summary: Host unusual disk write rate (instance {{ $labels.instance }})
description: Disk is probably writing too much data (> 50 MB/s)
description: Disk is probably writing too much data (> 500 MB/s)
# Please add ignored mountpoints in node_exporter parameters like
# "--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|run)($|/)".
# Same rule using "node_filesystem_free_bytes" will fire when disk fills for non-root users.
@ -361,12 +363,16 @@ kind: PodMonitor
metadata:
name: node-exporter
spec:
selector:
matchLabels:
app: node-exporter
podMetricsEndpoints:
- port: web
scrapeTimeout: 30s
relabelings:
- sourceLabels: [__meta_kubernetes_pod_node_name]
targetLabel: node
---
apiVersion: v1
kind: ServiceAccount
@ -400,9 +406,10 @@ spec:
- --path.rootfs=/host/root
- --no-collector.wifi
- --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
- --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
- --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
image: prom/node-exporter:v1.3.1
- --collector.netclass.ignored-devices=^(veth|cali|vxlan|cni|vnet|tap|lo|wg)
- --collector.netdev.device-exclude=^(veth|cali|vxlan|cni|vnet|tap|lo|wg)
- --collector.diskstats.ignored-devices=^(sr[0-9][0-9]*)$
image: prom/node-exporter:v1.5.0
resources:
limits:
cpu: 50m
@ -429,6 +436,7 @@ spec:
readOnlyRootFilesystem: true
hostNetwork: true
hostPID: true
priorityClassName: system-node-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534


@ -79,12 +79,15 @@ spec:
prober:
url: snmp-exporter:9116
path: /snmp
metricRelabelings:
- sourceLabels: [__name__]
regex: '(.*)'
replacement: 'snmp_${1}'
targetLabel: __name__
targets:
staticConfig:
static:
- ups-4.mgmt.k-space.ee
- ups-5.mgmt.k-space.ee
- ups-6.mgmt.k-space.ee
- ups-7.mgmt.k-space.ee
- ups-8.mgmt.k-space.ee
- ups-9.mgmt.k-space.ee
@ -108,7 +111,7 @@ spec:
annotations:
summary: One or more UPS-es is not in normal operation mode. This either means
power is lost or UPS was loaded and it's now in bypass mode.
expr: sum(snmp_upsOutputSource { upsOutputSource = 'normal' }) < 6
expr: sum(snmp_upsOutputSource { upsOutputSource = 'normal' }) != 4
for: 1m
labels:
severity: critical
@ -132,6 +135,11 @@ spec:
prober:
url: snmp-exporter:9116
path: /snmp
metricRelabelings:
- sourceLabels: [__name__]
regex: '(.*)'
replacement: 'snmp_${1}'
targetLabel: __name__
targets:
staticConfig:
static:
@ -166,6 +174,11 @@ spec:
prober:
url: snmp-exporter:9116
path: /snmp
metricRelabelings:
- sourceLabels: [__name__]
regex: '(.*)'
replacement: 'snmp_${1}'
targetLabel: __name__
targets:
staticConfig:
static:


@ -33,6 +33,7 @@ epson_beamer:
type: gauge
printer_mib:
version: 1
walk:
- 1.3.6.1.2.1.25.3.5.1.1
- 1.3.6.1.2.1.43.11.1.1.5

55
storage-class.yaml Normal file

@ -0,0 +1,55 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mongo
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: minio
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: prometheus
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: postgres
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: mysql
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
fsType: "xfs"


@ -5,5 +5,6 @@ Calico implements the inter-pod overlay network
```
curl https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml -O
curl https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -O
kubectl apply -f tigera-operator.yaml -f custom-resources.yaml
kubectl apply -f custom-resources.yaml
kubectl replace -f tigera-operator.yaml
```
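The operator manifest is `kubectl replace`-d rather than `kubectl apply`-d, presumably because its CRDs exceed the object size that client-side apply can record in the `last-applied-configuration` annotation; server-side apply would be the other workaround:
```
kubectl apply --server-side -f tigera-operator.yaml
```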


@ -1,64 +0,0 @@
#!/bin/bash
NAMESPACE=${NAMESPACE:-longhorn-system}
remove_and_wait() {
local crd=$1
out=`kubectl -n ${NAMESPACE} delete $crd --all 2>&1`
if [ $? -ne 0 ]; then
echo $out
return
fi
while true; do
out=`kubectl -n ${NAMESPACE} get $crd -o yaml | grep 'items: \[\]'`
if [ $? -eq 0 ]; then
break
fi
sleep 1
done
echo all $crd instances deleted
}
remove_crd_instances() {
remove_and_wait volumes.longhorn.rancher.io
# TODO: remove engines and replicas once we fix https://github.com/rancher/longhorn/issues/273
remove_and_wait engines.longhorn.rancher.io
remove_and_wait replicas.longhorn.rancher.io
remove_and_wait engineimages.longhorn.rancher.io
remove_and_wait settings.longhorn.rancher.io
# do this one last; manager crashes
remove_and_wait nodes.longhorn.rancher.io
}
# Delete driver related workloads in specific order
remove_driver() {
kubectl -n ${NAMESPACE} delete deployment.apps/longhorn-driver-deployer
kubectl -n ${NAMESPACE} delete daemonset.apps/longhorn-csi-plugin
kubectl -n ${NAMESPACE} delete statefulset.apps/csi-attacher
kubectl -n ${NAMESPACE} delete service/csi-attacher
kubectl -n ${NAMESPACE} delete statefulset.apps/csi-provisioner
kubectl -n ${NAMESPACE} delete service/csi-provisioner
kubectl -n ${NAMESPACE} delete daemonset.apps/longhorn-flexvolume-driver
}
# Delete all workloads in the namespace
remove_workloads() {
kubectl -n ${NAMESPACE} get daemonset.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get deployment.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get replicaset.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get statefulset.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get pods -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get service -o yaml | kubectl delete -f -
}
# Delete CRD definitions with longhorn.rancher.io in the name
remove_crds() {
for crd in $(kubectl get crd -o jsonpath={.items[*].metadata.name} | tr ' ' '\n' | grep longhorn.rancher.io); do
kubectl delete crd/$crd
done
}
remove_crd_instances
remove_driver
remove_workloads
remove_crds


@ -1,5 +1,5 @@
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
@ -10,7 +10,7 @@ spec:
# Note: The ipPools section cannot be modified post-install.
ipPools:
- blockSize: 26
cidr: 192.168.0.0/16
cidr: 10.244.0.0/16
encapsulation: VXLANCrossSubnet
natOutgoing: Enabled
nodeSelector: all()
@ -18,7 +18,7 @@ spec:
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.APIServer
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:

File diff suppressed because it is too large


@ -64,8 +64,16 @@ spec:
number: 9000
tls:
- hosts:
- traefik.k-space.ee
secretName: traefik-tls
- "*.k-space.ee"
secretName: wildcard-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
name: default
spec:
defaultCertificate:
secretName: wildcard-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware


@ -104,7 +104,6 @@ metadata:
name: pve
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd,traefik-proxmox-redirect@kubernetescrd
@ -147,9 +146,7 @@ spec:
number: 8006
tls:
- hosts:
- pve.k-space.ee
- proxmox.k-space.ee
secretName: pve-tls
- "*.k-space.ee"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware


@ -1,12 +1,16 @@
image:
tag: "2.8"
tag: "2.9"
websecure:
tls:
enabled: true
providers:
kubernetesCRD:
enabled: true
kubernetesIngress:
allowEmptyServices: true
allowExternalNameServices: true
deployment:


@ -17,7 +17,6 @@ metadata:
name: voron
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@ -36,5 +35,4 @@ spec:
name: http
tls:
- hosts:
- voron.k-space.ee
secretName: voron-tls
- "*.k-space.ee"


@ -41,7 +41,6 @@ kind: Ingress
metadata:
name: whoami
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
@ -50,8 +49,7 @@ metadata:
spec:
tls:
- hosts:
- "whoami.k-space.ee"
secretName: whoami-tls
- "*.k-space.ee"
rules:
- host: "whoami.k-space.ee"
http:


@ -104,7 +104,6 @@ metadata:
namespace: wildduck
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@ -123,8 +122,7 @@ spec:
number: 80
tls:
- hosts:
- webmail.k-space.ee
secretName: webmail-tls
- "*.k-space.ee"
---
apiVersion: codemowers.io/v1alpha1
kind: KeyDBCluster