forked from k-space/kube
Compare commits: master...alertmanag (1 commit, a11a43c757)

.gitignore (vendored), 4 lines changed

@@ -3,7 +3,3 @@
*.swp
*.save
*.1

### IntelliJ IDEA ###
.idea
*.iml
README.md, 143 lines changed

@@ -23,7 +23,6 @@ Most endpoints are protected by OIDC authentication or Authelia SSO middleware.
General discussion is happening in the `#kube` Slack channel.

<details><summary>Bootstrapping access</summary>

For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
nodes and place it under `~/.kube/config` on your machine.
@@ -47,9 +46,9 @@ EOF
sudo systemctl daemon-reload
systemctl restart kubelet
```
</details>

The following can be used to talk to the Kubernetes cluster using OIDC credentials:
Afterwards the following can be used to talk to the Kubernetes cluster using
OIDC credentials:

```bash
kubectl krew install oidc-login
@@ -90,41 +89,28 @@ EOF

For access control mapping see [cluster-role-bindings.yml](cluster-role-bindings.yml)

### systemd-resolved issues on access

```sh
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
```
```
Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
```
# Technology mapping

Our self-hosted Kubernetes stack compared to AWS based deployments:

| Hipster startup   | Self-hosted hackerspace             | Purpose                                                              |
|-------------------|-------------------------------------|----------------------------------------------------------------------|
| AWS ALB           | Traefik                             | Reverse proxy, also known as ingress controller in Kubernetes jargon |
| AWS AMP           | Prometheus Operator                 | Monitoring and alerting                                              |
| AWS CloudTrail    | ECK Operator                        | Log aggregation                                                      |
| AWS DocumentDB    | MongoDB Community Operator          | Highly available NoSQL database                                      |
| AWS EBS           | Longhorn                            | Block storage for arbitrary applications needing persistent storage  |
| AWS EC2           | Proxmox                             | Virtualization layer                                                 |
| AWS ECR           | Harbor                              | Docker registry                                                      |
| AWS EKS           | kubeadm                             | Provision Kubernetes master nodes                                    |
| AWS NLB           | MetalLB                             | L2/L3 level load balancing                                           |
| AWS RDS for MySQL | MySQL Operator                      | Provision highly available relational databases                      |
| AWS Route53       | Bind and RFC2136                    | DNS records and Let's Encrypt DNS validation                         |
| AWS S3            | Minio Operator                      | Highly available object storage                                      |
| AWS VPC           | Calico                              | Overlay network                                                      |
| Dex               | Authelia                            | ACL mapping and OIDC provider which integrates with GitHub/Samba     |
| GitHub Actions    | Drone                               | Build Docker images                                                  |
| GitHub            | Gitea                               | Source code management, issue tracking                               |
| GitHub OAuth2     | Samba (Active Directory compatible) | Source of truth for authentication and authorization                 |
| Gmail             | Wildduck                            | E-mail                                                               |

| Hipster startup | Self-hosted hackerspace             | Purpose                                                              |
|-----------------|-------------------------------------|----------------------------------------------------------------------|
| AWS EC2         | Proxmox                             | Virtualization layer                                                 |
| AWS EKS         | kubeadm                             | Provision Kubernetes master nodes                                    |
| AWS EBS         | Longhorn                            | Block storage for arbitrary applications needing persistent storage  |
| AWS NLB         | MetalLB                             | L2/L3 level load balancing                                           |
| AWS ALB         | Traefik                             | Reverse proxy, also known as ingress controller in Kubernetes jargon |
| AWS ECR         | Harbor                              | Docker registry                                                      |
| AWS DocumentDB  | MongoDB                             | NoSQL database                                                       |
| AWS S3          | Minio                               | Object storage                                                       |
| GitHub OAuth2   | Samba (Active Directory compatible) | Source of truth for authentication and authorization                 |
| Dex             | Authelia                            | ACL mapping and OIDC provider which integrates with GitHub/Samba     |
| GitHub          | Gitea                               | Source code management, issue tracking                               |
| GitHub Actions  | Drone                               | Build Docker images                                                  |
| Gmail           | Wildduck                            | E-mail                                                               |
| AWS Route53     | Bind and RFC2136                    | DNS records and Let's Encrypt DNS validation                         |
| AWS VPC         | Calico                              | Overlay network                                                      |

External dependencies running as classic virtual machines:
@@ -155,8 +141,7 @@ these should be handled by `tls:` section in Ingress.

## Cluster formation

Created Ubuntu 22.04 VMs on Proxmox with local storage.
Added some ARM64 workers by using Ubuntu 22.04 server on Raspberry Pi.
Create Ubuntu 20.04 VMs on Proxmox with local storage.

After machines have booted up and you can reach them via SSH:
@@ -174,13 +159,6 @@ net.ipv4.conf.all.accept_redirects = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1

# Elasticsearch needs this
vm.max_map_count = 524288

# Bump inotify limits to make sure
fs.inotify.max_user_instances=1280
fs.inotify.max_user_watches=655360
EOF
sysctl --system
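After `sysctl --system`, the values can be read back from `/proc/sys` to confirm they took effect; a minimal sketch (a sysctl key maps to a procfs path by replacing dots with slashes):

```shell
# Read each key back from procfs; dots in sysctl names become path separators.
for key in vm.max_map_count fs.inotify.max_user_instances fs.inotify.max_user_watches; do
  path="/proc/sys/$(echo "$key" | tr . /)"
  if [ -r "$path" ]; then
    echo "$key = $(cat "$path")"
  fi
done
```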
@@ -194,23 +172,32 @@ nameserver 8.8.8.8
EOF

# Disable multipathd as Longhorn handles that itself
systemctl mask multipathd snapd
systemctl disable --now multipathd snapd bluetooth ModemManager hciuart wpa_supplicant packagekit
systemctl mask multipathd
systemctl disable multipathd
systemctl stop multipathd

# Disable Snapcraft
systemctl mask snapd
systemctl disable snapd
systemctl stop snapd

# Permit root login
sed -i -e 's/PermitRootLogin no/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl reload ssh
cat ~ubuntu/.ssh/authorized_keys > /root/.ssh/authorized_keys
cat << EOF > /root/.ssh/authorized_keys
sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBD4/e9SWYWYoNZMkkF+NirhbmHuUgjoCap42kAq0pLIXFwIqgVTCre03VPoChIwBClc8RspLKqr5W3j0fG8QwnQAAAAEc3NoOg== lauri@lauri-x13
EOF
userdel -f ubuntu
apt-get install -yqq linux-image-generic
apt-get remove -yq cloud-init linux-image-*-kvm
apt-get remove -yq cloud-init
```
Install packages:
Install packages; for Raspbian set `OS=Debian_11`:

```bash
OS=xUbuntu_22.04
VERSION=1.24
OS=xUbuntu_20.04
VERSION=1.23
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
@@ -218,26 +205,17 @@ cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cr
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF

rm -fv /etc/apt/trusted.gpg
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/libcontainers.gpg
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | gpg --dearmor > /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg > /etc/apt/trusted.gpg.d/packages-cloud-google.gpg
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg add -
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

apt-get update
apt-get install -yqq --allow-change-held-packages apt-transport-https curl cri-o cri-o-runc kubelet=1.24.10-00 kubectl=1.24.10-00 kubeadm=1.24.10-00

cat << \EOF > /etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]
# To pull Docker images from a mirror uncomment following
#[[registry]]
#prefix = "docker.io"
#location = "mirror.gcr.io"
EOF
sudo systemctl restart crio
apt-get install -yqq apt-transport-https curl cri-o cri-o-runc kubelet=1.23.5-00 kubectl=1.23.5-00 kubeadm=1.23.5-00
sudo systemctl daemon-reload
sudo systemctl enable crio --now
apt-mark hold kubelet kubeadm kubectl
sed -i -e 's/unqualified-search-registries = .*/unqualified-search-registries = ["docker.io"]/' /etc/containers/registries.conf
```
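The final `sed` line above rewrites whatever `unqualified-search-registries` value CRI-O shipped with; the substitution can be sanity-checked against a temporary file so the real `/etc/containers/registries.conf` is left alone:

```shell
# Exercise the sed substitution on a scratch copy instead of the real config.
tmp=$(mktemp)
printf 'unqualified-search-registries = ["registry.fedoraproject.org"]\n' > "$tmp"
sed -i -e 's/unqualified-search-registries = .*/unqualified-search-registries = ["docker.io"]/' "$tmp"
cat "$tmp"   # unqualified-search-registries = ["docker.io"]
rm -f "$tmp"
```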
On master:

@@ -248,16 +226,6 @@ kubeadm init --token-ttl=120m --pod-network-cidr=10.244.0.0/16 --control-plane-e

For the `kubeadm join` command specify FQDN via `--node-name $(hostname -f)`.

Set AZ labels:

```
for j in $(seq 1 9); do
  for t in master mon worker storage; do
    kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}
  done
done
```
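To preview what the nested labelling loop above will run before pointing it at a live cluster, the same loop can be dry-run by echoing the generated commands (the node-count range here is illustrative):

```shell
# Dry run: print the label commands instead of executing them.
for j in $(seq 1 2); do
  for t in master mon worker storage; do
    echo "kubectl label nodes ${t}${j}.kube.k-space.ee topology.kubernetes.io/zone=node${j}"
  done
done
```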
After forming the cluster add taints:

```bash
@@ -265,7 +233,7 @@ for j in $(seq 1 9); do
kubectl label nodes worker${j}.kube.k-space.ee node-role.kubernetes.io/worker=''
done

for j in $(seq 1 4); do
for j in $(seq 1 3); do
kubectl taint nodes mon${j}.kube.k-space.ee dedicated=monitoring:NoSchedule
kubectl label nodes mon${j}.kube.k-space.ee dedicated=monitoring
done
@@ -276,26 +244,15 @@ for j in $(seq 1 4); do
done
```

On Raspberry Pi you need to take additional steps:

* Manually enable cgroups by appending
  `cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`
* Disable swap with `swapoff -a; apt-get purge -y dphys-swapfile`
* For mounting Longhorn volumes on Raspbian install `open-iscsi`

For `arm64` nodes add suitable taint to prevent scheduling non-multiarch images on them:

```bash
kubectl taint nodes worker9.kube.k-space.ee arch=arm64:NoSchedule
```

For door controllers:

```
for j in ground front back; do
  kubectl taint nodes door-${j}.kube.k-space.ee dedicated=door:NoSchedule
  kubectl label nodes door-${j}.kube.k-space.ee dedicated=door
  kubectl taint nodes door-${j}.kube.k-space.ee arch=arm64:NoSchedule
done
```

To reduce wear on storage:

```
echo StandardOutput=null >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
```
@@ -36,13 +36,8 @@ kubectl -n argocd create secret generic gitea-kube-staging \
  --from-literal=type=git \
  --from-literal=url=git@git.k-space.ee:k-space/kube-staging \
  --from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd create secret generic gitea-kube-members \
  --from-literal=type=git \
  --from-literal=url=git@git.k-space.ee:k-space/kube-members \
  --from-file=sshPrivateKey=id_ecdsa
kubectl label -n argocd secret gitea-kube argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-staging argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-members argocd.argoproj.io/secret-type=repository
rm -fv id_ecdsa
```
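The `id_ecdsa` file consumed above is created in lines elided from this hunk; presumably something along these lines, i.e. a passphrase-less ECDSA deploy key (this `ssh-keygen` invocation is an assumption, not shown in the diff):

```shell
# Generate a throwaway passphrase-less ECDSA keypair to register as a repo deploy key.
dir=$(mktemp -d)
ssh-keygen -q -t ecdsa -N '' -f "$dir/id_ecdsa"
ls "$dir"   # id_ecdsa  id_ecdsa.pub
```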
@@ -1,17 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana
  name: foobar
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: grafana
    path: foobar
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: grafana
    namespace: foobar
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
@@ -5,16 +5,17 @@ metadata:
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: elastic-system
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: elastic-system
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: elastic-system
    targetRevision: HEAD
  ignoreDifferences:
    - group: admissionregistration.k8s.io
      kind: ValidatingWebhookConfiguration
@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: logmower
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube.git'
    path: logmower
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: logmower
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
@@ -1,17 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: members
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@git.k-space.ee:k-space/kube-members.git'
    path: .
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: members
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
@@ -16,6 +16,7 @@ server:
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: default
      external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
      traefik.ingress.kubernetes.io/router.entrypoints: websecure
      traefik.ingress.kubernetes.io/router.tls: "true"
@@ -23,7 +24,8 @@ server:
      - argocd.k-space.ee
    tls:
      - hosts:
          - "*.k-space.ee"
          - argocd.k-space.ee
        secretName: argocd-server-tls
  configEnabled: true
  config:
    admin.enabled: "false"
@@ -162,8 +162,8 @@ kubectl -n argocd create secret generic argocd-secret \
  kubectl get secret -n authelia oidc-secrets -o json \
    | jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
    | jq '.identity_providers.oidc.clients[] | select(.id == "argocd") | .secret' -r)
kubectl -n grafana delete secret oidc-secret
kubectl -n grafana create secret generic oidc-secret \
kubectl -n monitoring delete secret oidc-secret
kubectl -n monitoring create secret generic oidc-secret \
  --from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$( \
    kubectl get secret -n authelia oidc-secrets -o json \
      | jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
@@ -295,6 +295,7 @@ metadata:
  labels:
    app.kubernetes.io/name: authelia
  annotations:
    cert-manager.io/cluster-issuer: default
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    kubernetes.io/tls-acme: "true"
    traefik.ingress.kubernetes.io/router.entryPoints: websecure
@@ -314,7 +315,8 @@ spec:
              number: 80
  tls:
    - hosts:
        - "*.k-space.ee"
        - auth.k-space.ee
      secretName: authelia-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
@@ -1,16 +1,7 @@
To apply changes:

```
kubectl apply -n camtiler \
  -f application.yml \
  -f persistence.yml \
  -f mongoexpress.yml \
  -f mongodb-support.yml \
  -f camera-tiler.yml \
  -f logmower.yml \
  -f ingress.yml \
  -f network-policies.yml \
  -f networkpolicy-base.yml
kubectl apply -n camtiler -f application.yml -f persistence.yml -f mongoexpress.yml -f mongodb-support.yml -f networkpolicy-base.yml -f minio-support.yml
```

To deploy changes:

@@ -24,16 +15,15 @@ To initialize secrets:

```
kubectl create secret generic -n camtiler mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n camtiler minio-secrets \
kubectl create secret generic -n camtiler minio-secret \
  --from-literal=accesskey=application \
  --from-literal=secretkey=$(cat /dev/urandom | base64 | head -c 30)
kubectl create secret generic -n camtiler minio-env-configuration \
  --from-literal="MINIO_BROWSER=off" \
  --from-literal="MINIO_ROOT_USER=root" \
  --from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)"
  --from-literal="MINIO_ROOT_PASSWORD=$(cat /dev/urandom | base64 | head -c 30)" \
  --from-literal="MINIO_STORAGE_CLASS_STANDARD=EC:4"
kubectl -n camtiler create secret generic camera-secrets \
  --from-literal=username=... \
  --from-literal=password=...
```
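The `$(cat /dev/urandom | base64 | head -c 30)` idiom used above produces a 30-character random password (base64 only inserts a newline every 76 characters, so the first 30 are newline-free); a standalone sketch:

```shell
# Generate a 30-character random password the same way the secret commands do.
pw=$(cat /dev/urandom | base64 | head -c 30)
echo "${#pw}"   # 30
```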
To restart all deployments:

```
for j in $(kubectl get deployments -n camtiler -o name); do kubectl rollout restart -n camtiler $j; done
```
@@ -1,4 +1,397 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: camtiler
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
spec:
  revisionHistoryLimit: 0
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: camtiler
  template:
    metadata:
      labels:
        app.kubernetes.io/name: camtiler
        component: camtiler
    spec:
      serviceAccountName: camtiler
      containers:
        - name: camtiler
          image: harbor.k-space.ee/k-space/camera-tiler:latest
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          ports:
            - containerPort: 5001
              name: "http"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-viewer-frontend
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
spec:
  revisionHistoryLimit: 0
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: log-viewer-frontend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: log-viewer-frontend
    spec:
      containers:
        - name: log-viewer-frontend
          image: harbor.k-space.ee/k-space/log-viewer-frontend:latest
          # securityContext:
          #   readOnlyRootFilesystem: true
          #   runAsNonRoot: true
          #   runAsUser: 1000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-viewer-backend
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
spec:
  revisionHistoryLimit: 0
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: log-viewer-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: log-viewer-backend
    spec:
      containers:
        - name: log-backend-backend
          image: harbor.k-space.ee/k-space/log-viewer:latest
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          env:
            - name: MONGO_URI
              valueFrom:
                secretKeyRef:
                  name: mongodb-application-readwrite
                  key: connectionString.standard
            - name: MINIO_BUCKET
              value: application
            - name: MINIO_HOSTNAME
              value: cams-s3.k-space.ee
            - name: MINIO_PORT
              value: "443"
            - name: MINIO_SCHEME
              value: "https"
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: secretkey
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-secret
                  key: accesskey
---
apiVersion: v1
kind: Service
metadata:
  name: log-viewer-frontend
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: log-viewer-frontend
  ports:
    - protocol: TCP
      port: 3003
---
apiVersion: v1
kind: Service
metadata:
  name: log-viewer-backend
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: log-viewer-backend
  ports:
    - protocol: TCP
      port: 3002
---
apiVersion: v1
kind: Service
metadata:
  name: camtiler
  labels:
    component: camtiler
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: camtiler
    component: camtiler
  ports:
    - protocol: TCP
      port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: camtiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: camtiler
rules:
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: camtiler
subjects:
  - kind: ServiceAccount
    name: camtiler
    apiGroup: ""
roleRef:
  kind: Role
  name: camtiler
  apiGroup: ""
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: camtiler
  annotations:
    kubernetes.io/ingress.class: traefik

    # Following specifies the certificate issuer defined in
    # ../cert-manager/issuer.yml
    # This is where the HTTPS certificates for the
    # `tls:` section below are obtained from
    cert-manager.io/cluster-issuer: default

    # This tells Traefik this Ingress object is associated with the
    # https:// entrypoint
    # Global http:// to https:// redirect is enabled in
    # ../traefik/values.yml using `globalArguments`
    traefik.ingress.kubernetes.io/router.entrypoints: websecure

    # Following enables Authelia intercepting middleware
    # which makes sure user is authenticated and then
    # proceeds to inject Remote-User header for the application
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd

    traefik.ingress.kubernetes.io/router.tls: "true"

    # Following tells external-dns to add CNAME entry which makes
    # cams.k-space.ee point to same IP address as traefik.k-space.ee
    # The A record for traefik.k-space.ee is created via annotation
    # added in ../traefik/ingress.yml
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: cams.k-space.ee
      http:
        paths:
          - pathType: Prefix
            path: "/tiled"
            backend:
              service:
                name: camtiler
                port:
                  number: 5001
          - pathType: Prefix
            path: "/events"
            backend:
              service:
                name: log-viewer-backend
                port:
                  number: 3002
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: log-viewer-frontend
                port:
                  number: 3003
  tls:
    - hosts:
        - cams.k-space.ee
      secretName: camtiler-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: camera-motion-detect
spec:
  podSelector:
    matchLabels:
      component: camdetect
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              component: camtiler
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: prometheus-operator
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
  egress:
    - to:
        - ipBlock:
            # Permit access to cameras outside the cluster
            cidr: 100.102.0.0/16
    - to:
        - podSelector:
            matchLabels:
              app: mongodb-svc
      ports:
        - port: 27017
    - to:
        - podSelector:
            matchLabels:
              v1.min.io/tenant: minio
      ports:
        - port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: camera-tiler
spec:
  podSelector:
    matchLabels:
      component: camtiler
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              component: camdetect
      ports:
        - port: 5000
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: prometheus-operator
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: log-viewer-backend
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: log-viewer-backend
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mongodb-svc
    - to:
        # Minio access via Traefik's public endpoint
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: log-viewer-frontend
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: log-viewer-frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: cams-s3.k-space.ee
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: minio
                port:
                  number: 80
  tls:
    - hosts:
        - cams-s3.k-space.ee
      secretName: cams-s3-tls
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
@@ -89,13 +482,12 @@ spec:
metadata:
  name: foobar
  labels:
    app.kubernetes.io/name: foobar
    component: camera-motion-detect
    component: camdetect
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: foobar
    component: camera-motion-detect
    component: camdetect
  ports:
    - protocol: TCP
      port: 80
@@ -110,15 +502,14 @@ spec:
    keel.sh/policy: force
    keel.sh/trigger: poll
spec:
  revisionHistoryLimit: 0
  replicas: 1

  # Make sure we do not congest the network during rollout
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # Swap following two with replicas: 2
      maxSurge: 1
      maxUnavailable: 0
      maxSurge: 0
      maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: foobar
@@ -126,25 +517,18 @@ spec:
  metadata:
    labels:
      app.kubernetes.io/name: foobar
      component: camera-motion-detect
      component: camdetect
  spec:
    containers:
      - name: camera-motion-detect
      - name: camdetect
        image: harbor.k-space.ee/k-space/camera-motion-detect:latest
        startupProbe:
          httpGet:
            path: /healthz
            port: 5000
          initialDelaySeconds: 2
          periodSeconds: 180
          timeoutSeconds: 60
        readinessProbe:
          httpGet:
            path: /readyz
            port: 5000
          initialDelaySeconds: 60
          periodSeconds: 60
          timeoutSeconds: 5
          initialDelaySeconds: 10
          periodSeconds: 180
          timeoutSeconds: 60
        ports:
          - containerPort: 5000
            name: "http"
@@ -154,7 +538,7 @@ spec:
            cpu: "200m"
          limits:
            memory: "256Mi"
            cpu: "4000m"
            cpu: "1"
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
@@ -182,13 +566,13 @@ spec:
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio-secrets
              key: MINIO_ROOT_PASSWORD
              name: minio-secret
              key: secretkey
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: minio-secrets
              key: MINIO_ROOT_USER
              name: minio-secret
              key: accesskey

      # Make sure 2+ pods of same camera are scheduled on different hosts
      affinity:
@@ -196,7 +580,7 @@ spec:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/name
                - key: app
                  operator: In
                  values:
                    - foobar
@@ -210,7 +594,18 @@ spec:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: foobar
              component: camera-motion-detect
              component: camdetect
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: camtiler
spec:
  selector: {}
  podMetricsEndpoints:
    - port: http
  podTargetLabels:
    - app.kubernetes.io/name
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
@@ -221,21 +616,21 @@ spec:
    - name: cameras
      rules:
        - alert: CameraLost
          expr: rate(camtiler_frames_total{stage="downloaded"}[1m]) < 1
          expr: rate(camdetect_rx_frames_total[2m]) < 1
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: Camera feed stopped
        - alert: CameraServerRoomMotion
          expr: rate(camtiler_events_total{app_kubernetes_io_name="server-room"}[30m]) > 0
          expr: camdetect_event_active{app="camdetect-server-room"} > 0
          for: 1m
          labels:
            severity: warning
          annotations:
            summary: Motion was detected in server room
        - alert: CameraSlowUploads
          expr: camtiler_queue_frames{stage="upload"} > 10
          expr: rate(camdetect_upload_dropped_frames_total[2m]) > 1
          for: 5m
          labels:
            severity: warning
@ -243,20 +638,13 @@ spec:
|
||||
summary: Motion detect snapshots are piling up and
|
||||
not getting uploaded to S3
|
||||
- alert: CameraSlowProcessing
|
||||
expr: camtiler_queue_frames{stage="download"} > 10
|
||||
expr: rate(camdetect_download_dropped_frames_total[2m]) > 1
|
||||
for: 5m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: Motion detection processing pipeline is not keeping up
|
||||
with incoming frames
|
||||
- alert: CameraResourcesThrottled
|
||||
expr: sum by (pod) (rate(container_cpu_cfs_throttled_periods_total{namespace="camtiler"}[1m])) > 0
|
||||
for: 5m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: CPU limits are bottleneck
|
||||
---
|
||||
apiVersion: k-space.ee/v1alpha1
|
||||
kind: Camera
|
||||
@ -265,7 +653,6 @@ metadata:
|
||||
spec:
|
||||
target: http://user@workshop.cam.k-space.ee:8080/?action=stream
|
||||
secretRef: camera-secrets
|
||||
replicas: 1
|
||||
---
|
||||
apiVersion: k-space.ee/v1alpha1
|
||||
kind: Camera
|
||||
@ -274,7 +661,6 @@ metadata:
|
||||
spec:
|
||||
target: http://user@server-room.cam.k-space.ee:8080/?action=stream
|
||||
secretRef: camera-secrets
|
||||
replicas: 1
|
||||
---
|
||||
apiVersion: k-space.ee/v1alpha1
|
||||
kind: Camera
|
||||
@ -283,7 +669,6 @@ metadata:
|
||||
spec:
|
||||
target: http://user@printer.cam.k-space.ee:8080/?action=stream
|
||||
secretRef: camera-secrets
|
||||
replicas: 1
|
||||
---
|
||||
apiVersion: k-space.ee/v1alpha1
|
||||
kind: Camera
|
||||
@ -292,7 +677,6 @@ metadata:
|
||||
spec:
|
||||
target: http://user@chaos.cam.k-space.ee:8080/?action=stream
|
||||
secretRef: camera-secrets
|
||||
replicas: 1
|
||||
---
|
||||
apiVersion: k-space.ee/v1alpha1
|
||||
kind: Camera
|
||||
@ -301,7 +685,6 @@ metadata:
|
||||
spec:
|
||||
target: http://user@cyber.cam.k-space.ee:8080/?action=stream
|
||||
secretRef: camera-secrets
|
||||
replicas: 1
|
||||
---
|
||||
apiVersion: k-space.ee/v1alpha1
|
||||
kind: Camera
|
||||
@ -310,7 +693,6 @@ metadata:
|
||||
spec:
|
||||
target: http://user@kitchen.cam.k-space.ee:8080/?action=stream
|
||||
secretRef: camera-secrets
|
||||
replicas: 1
|
||||
---
|
||||
apiVersion: k-space.ee/v1alpha1
|
||||
kind: Camera
|
||||
@ -319,7 +701,6 @@ metadata:
|
||||
spec:
|
||||
target: http://user@back-door.cam.k-space.ee:8080/?action=stream
|
||||
secretRef: camera-secrets
|
||||
replicas: 1
|
||||
---
|
||||
apiVersion: k-space.ee/v1alpha1
|
||||
kind: Camera
|
||||
@ -328,4 +709,3 @@ metadata:
|
||||
spec:
|
||||
target: http://user@ground-door.cam.k-space.ee:8080/?action=stream
|
||||
secretRef: camera-secrets
|
||||
replicas: 1
|
||||
|
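The hunks above swap out the PromQL expressions behind the camera alerts (e.g. `CameraLost` moving from `camtiler_frames_total` to `camdetect_rx_frames_total`). A rule change like this can be checked before merging with a promtool unit test; the sketch below is hypothetical and assumes the rewritten rules are saved to a local `rules.yml`:

```yaml
# rules_test.yml — hypothetical promtool unit test for the CameraLost rule
# above; run with: promtool test rules rules_test.yml
rule_files:
  - rules.yml
tests:
  - interval: 1m
    input_series:
      # Feed stalled: the counter stops increasing, so rate() falls below 1
      - series: 'camdetect_rx_frames_total'
        values: '0 0 0 0 0'
    alert_rule_test:
      - eval_time: 4m
        alertname: CameraLost
        exp_alerts:
          - exp_labels:
              severity: warning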
@@ -1,98 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: camera-tiler
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
spec:
  revisionHistoryLimit: 0
  replicas: 2
  selector:
    matchLabels: &selectorLabels
      app.kubernetes.io/name: camtiler
      component: camera-tiler
  template:
    metadata:
      labels: *selectorLabels
    spec:
      serviceAccountName: camera-tiler
      containers:
        - name: camera-tiler
          image: harbor.k-space.ee/k-space/camera-tiler:latest
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          ports:
            - containerPort: 5001
              name: "http"
          resources:
            requests:
              memory: "200Mi"
              cpu: "100m"
            limits:
              memory: "500Mi"
              cpu: "4000m"
---
apiVersion: v1
kind: Service
metadata:
  name: camera-tiler
  labels:
    app.kubernetes.io/name: camtiler
    component: camera-tiler
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: camtiler
    component: camera-tiler
  ports:
    - protocol: TCP
      port: 5001
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: camera-tiler
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: camera-tiler
rules:
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: camera-tiler
subjects:
  - kind: ServiceAccount
    name: camera-tiler
    apiGroup: ""
roleRef:
  kind: Role
  name: camera-tiler
  apiGroup: ""
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: camtiler
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: camtiler
      component: camera-tiler
  podMetricsEndpoints:
    - port: http
  podTargetLabels:
    - app.kubernetes.io/name
    - component
@@ -1,67 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: camtiler
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd,camtiler-redirect@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: cams.k-space.ee
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: logmower-frontend
                port:
                  number: 8080
    - host: cam.k-space.ee
      http:
        paths:
          - pathType: Prefix
            path: "/tiled"
            backend:
              service:
                name: camera-tiler
                port:
                  number: 5001
          - pathType: Prefix
            path: "/m"
            backend:
              service:
                name: camera-tiler
                port:
                  number: 5001
          - pathType: Prefix
            path: "/events"
            backend:
              service:
                name: logmower-eventsource
                port:
                  number: 3002
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: logmower-frontend
                port:
                  number: 8080
  tls:
    - hosts:
        - "*.k-space.ee"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect
spec:
  redirectRegex:
    regex: ^https://cams.k-space.ee/(.*)$
    replacement: https://cam.k-space.ee/$1
    permanent: false
@@ -1,137 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logmower-eventsource
spec:
  revisionHistoryLimit: 0
  replicas: 2
  selector:
    matchLabels: &selectorLabels
      app.kubernetes.io/name: camtiler
      component: logmower-eventsource
  template:
    metadata:
      labels: *selectorLabels
    spec:
      containers:
        - name: logmower-eventsource
          image: harbor.k-space.ee/k-space/logmower-eventsource
          ports:
            - containerPort: 3002
              name: nodejs
          env:
            - name: MONGO_COLLECTION
              value: eventlog
            - name: MONGODB_HOST
              valueFrom:
                secretKeyRef:
                  name: mongodb-application-readonly
                  key: connectionString.standard
            - name: BACKEND
              value: 'camtiler'
            - name: BACKEND_BROKER_URL
              value: 'http://logmower-event-broker'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logmower-event-broker
spec:
  revisionHistoryLimit: 0
  replicas: 5
  selector:
    matchLabels: &selectorLabels
      app.kubernetes.io/name: camtiler
      component: logmower-event-broker
  template:
    metadata:
      labels: *selectorLabels
    spec:
      containers:
        - name: logmower-event-broker
          image: harbor.k-space.ee/k-space/camera-event-broker
          ports:
            - containerPort: 3000
          env:
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-secrets
                  key: MINIO_ROOT_PASSWORD
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: minio-secrets
                  key: MINIO_ROOT_USER
            - name: MINIO_BUCKET
              value: 'application'
            - name: MINIO_HOSTNAME
              value: 'cams-s3.k-space.ee'
            - name: MINIO_PORT
              value: '443'
            - name: MINIO_SCHEMA
              value: 'https'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logmower-frontend
spec:
  revisionHistoryLimit: 0
  replicas: 2
  selector:
    matchLabels: &selectorLabels
      app.kubernetes.io/name: camtiler
      component: logmower-frontend
  template:
    metadata:
      labels: *selectorLabels
    spec:
      containers:
        - name: logmower-frontend
          image: harbor.k-space.ee/k-space/logmower-frontend
          ports:
            - containerPort: 8080
              name: http

---
apiVersion: v1
kind: Service
metadata:
  name: logmower-frontend
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: camtiler
    component: logmower-frontend
  ports:
    - protocol: TCP
      port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: logmower-eventsource
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: camtiler
    component: logmower-eventsource
  ports:
    - protocol: TCP
      port: 3002
---
apiVersion: v1
kind: Service
metadata:
  name: logmower-event-broker
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: camtiler
    component: logmower-event-broker
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
camtiler/minio-support.yml (symbolic link, 1 line)
@@ -0,0 +1 @@
../shared/minio-support.yml
@@ -1,199 +0,0 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
  labels:
    app.kubernetes.io/name: minio
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: minio
  serviceName: minio-svc
  replicas: 4
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app.kubernetes.io/name: minio
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - minio
              topologyKey: kubernetes.io/hostname
      nodeSelector:
        dedicated: storage
      tolerations:
        - key: dedicated
          operator: Equal
          value: storage
          effect: NoSchedule
      containers:
        - name: minio
          env:
            - name: MINIO_PROMETHEUS_AUTH_TYPE
              value: public
          envFrom:
            - secretRef:
                name: minio-secrets
          image: minio/minio:RELEASE.2022-12-12T19-27-27Z
          args:
            - server
            - http://minio-{0...3}.minio-svc.camtiler.svc.cluster.local/data
            - --address
            - 0.0.0.0:9000
            - --console-address
            - 0.0.0.0:9001
          ports:
            - containerPort: 9000
              name: http
            - containerPort: 9001
              name: console
          readinessProbe:
            httpGet:
              path: /minio/health/ready
              port: 9000
            initialDelaySeconds: 2
            periodSeconds: 5
          resources:
            requests:
              cpu: 300m
              memory: 1Gi
            limits:
              cpu: 4000m
              memory: 2Gi
          volumeMounts:
            - name: minio-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: minio-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: '30Gi'
        storageClassName: minio
---
apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  sessionAffinity: ClientIP
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 9000
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: minio
---
kind: Service
apiVersion: v1
metadata:
  name: minio-svc
spec:
  selector:
    app.kubernetes.io/name: minio
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: http
      port: 9000
    - name: console
      port: 9001
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: minio
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: minio
  podMetricsEndpoints:
    - port: http
      path: /minio/v2/metrics/node
  podTargetLabels:
    - app.kubernetes.io/name
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: minio
spec:
  endpoints:
    - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      honorLabels: true
      port: minio
      path: /minio/v2/metrics/cluster
  selector:
    matchLabels:
      app.kubernetes.io/name: minio
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: cams-s3.k-space.ee
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: minio-svc
                port:
                  name: http
  tls:
    - hosts:
        - "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: minio
spec:
  groups:
    - name: minio
      rules:
        - alert: MinioClusterDiskOffline
          expr: minio_cluster_disk_offline_total > 0
          for: 0m
          labels:
            severity: critical
          annotations:
            summary: Minio cluster disk offline (instance {{ $labels.instance }})
            description: "Minio cluster disk is offline"
        - alert: MinioNodeDiskOffline
          expr: minio_cluster_nodes_offline_total > 0
          for: 0m
          labels:
            severity: critical
          annotations:
            summary: Minio node disk offline (instance {{ $labels.instance }})
            description: "Minio cluster node disk is offline"
        - alert: MinioDiskSpaceUsage
          expr: disk_storage_available / disk_storage_total * 100 < 10
          for: 0m
          labels:
            severity: warning
          annotations:
            summary: Minio disk space usage (instance {{ $labels.instance }})
            description: "Minio available free space is low (< 10%)"
@@ -1,192 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: camera-motion-detect
spec:
  podSelector:
    matchLabels:
      component: camera-motion-detect
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: camtiler
              component: camera-tiler
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: prometheus-operator
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
  egress:
    - to:
        - ipBlock:
            # Permit access to cameras outside the cluster
            cidr: 100.102.0.0/16
    - to:
        - podSelector:
            matchLabels:
              app: mongodb-svc
      ports:
        - port: 27017
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: minio
      ports:
        - port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: camera-tiler
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: camtiler
      component: camera-tiler
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              component: camera-motion-detect
      ports:
        - port: 5000
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: prometheus-operator
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logmower-eventsource
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: camtiler
      component: logmower-eventsource
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mongodb-svc
        - podSelector:
            matchLabels:
              component: logmower-event-broker
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logmower-event-broker
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: camtiler
      component: logmower-event-broker
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        # Minio access via Traefik's public endpoint
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
  ingress:
    - from:
        - podSelector:
            matchLabels:
              component: logmower-eventsource
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logmower-frontend
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: camtiler
      component: logmower-frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: minio
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: minio
  policyTypes:
    - Ingress
    - Egress
  egress:
    - ports:
        - port: http
      to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: minio
  ingress:
    - ports:
        - port: http
      from:
        - podSelector: {}
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: prometheus-operator
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
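The NetworkPolicy file above whitelists traffic per component, but a NetworkPolicy only restricts pods its `podSelector` matches; pods not selected by any policy stay wide open. Per-component allow-lists like these are therefore commonly paired with a namespace-wide catch-all. The fragment below is a hypothetical sketch of such a default-deny policy, not something present in this diff:

```yaml
# Hypothetical default-deny for the namespace: selects every pod and
# declares both policy types with no allow rules, so only traffic
# explicitly permitted by the per-component policies gets through.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```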
@@ -7,10 +7,9 @@ spec:
  additionalMongodConfig:
    systemLog:
      quiet: true
  members: 2
  arbiters: 1
  members: 3
  type: ReplicaSet
  version: "6.0.3"
  version: "5.0.9"
  security:
    authentication:
      modes: ["SCRAM"]
@@ -28,7 +27,7 @@ spec:
      passwordSecretRef:
        name: mongodb-application-readonly-password
      roles:
        - name: read
        - name: readOnly
          db: application
      scramCredentialsSecretName: mongodb-application-readonly
  statefulSet:
@@ -36,24 +35,6 @@ spec:
    logLevel: WARN
    template:
      spec:
        containers:
          - name: mongod
            resources:
              requests:
                cpu: 100m
                memory: 512Mi
              limits:
                cpu: 500m
                memory: 1Gi
            volumeMounts:
              - name: journal-volume
                mountPath: /data/journal
          - name: mongodb-agent
            resources:
              requests:
                cpu: 1m
                memory: 100Mi
              limits: {}
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
@@ -74,21 +55,8 @@ spec:
    volumeClaimTemplates:
      - metadata:
          name: logs-volume
          labels:
            usecase: logs
        spec:
          storageClassName: mongo
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Mi
      - metadata:
          name: journal-volume
          labels:
            usecase: journal
        spec:
          storageClassName: mongo
          storageClassName: local-path
          accessModes:
            - ReadWriteOnce
          resources:
@@ -96,12 +64,67 @@ spec:
            storage: 512Mi
      - metadata:
          name: data-volume
          labels:
            usecase: data
        spec:
          storageClassName: mongo
          storageClassName: local-path
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: minio
  annotations:
    prometheus.io/path: /minio/prometheus/metrics
    prometheus.io/port: "9000"
    prometheus.io/scrape: "true"
spec:
  credsSecret:
    name: minio-secret
  buckets:
    - name: application
  requestAutoCert: false
  users:
    - name: minio-user-0
  pools:
    - name: pool-0
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: v1.min.io/tenant
                    operator: In
                    values:
                      - minio
                  - key: v1.min.io/pool
                    operator: In
                    values:
                      - pool-0
              topologyKey: kubernetes.io/hostname
      resources:
        requests:
          cpu: '1'
          memory: 512Mi
      servers: 4
      volumesPerServer: 1
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: '30Gi'
          storageClassName: local-path
        status: {}
      nodeSelector:
        dedicated: storage
      tolerations:
        - key: dedicated
          operator: Equal
          value: storage
          effect: NoSchedule
@@ -77,11 +77,14 @@ steps:
    - echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
    - cat Dockerfile
  - name: docker
    image: harbor.k-space.ee/k-space/drone-kaniko
    image: plugins/docker
    settings:
      repo: ${DRONE_REPO}
      repo: harbor.k-space.ee/${DRONE_REPO}
      tags: latest-arm64
      registry: harbor.k-space.ee
      squash: true
      experimental: true
      mtu: 1300
      username:
        from_secret: docker_username
      password:
@@ -106,11 +109,14 @@ steps:
    - echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
    - cat Dockerfile
  - name: docker
    image: harbor.k-space.ee/k-space/drone-kaniko
    image: plugins/docker
    settings:
      repo: ${DRONE_REPO}
      repo: harbor.k-space.ee/${DRONE_REPO}
      tags: latest-amd64
      registry: harbor.k-space.ee
      squash: true
      experimental: true
      mtu: 1300
      storage_driver: vfs
      username:
        from_secret: docker_username
@@ -124,8 +130,8 @@ steps:
  - name: manifest
    image: plugins/manifest
    settings:
      target: ${DRONE_REPO}:latest
      template: ${DRONE_REPO}:latest-ARCH
      target: harbor.k-space.ee/${DRONE_REPO}:latest
      template: harbor.k-space.ee/${DRONE_REPO}:latest-ARCH
      platforms:
        - linux/amd64
        - linux/arm64
@@ -83,6 +83,7 @@ kind: Ingress
metadata:
  name: drone
  annotations:
    cert-manager.io/cluster-issuer: default
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
@@ -90,7 +91,8 @@ metadata:
spec:
  tls:
    - hosts:
        - "*.k-space.ee"
        - "drone.k-space.ee"
      secretName: drone-tls
  rules:
    - host: "drone.k-space.ee"
      http:
@@ -1,7 +1,7 @@
# elastic-operator

```
wget https://download.elastic.co/downloads/eck/2.4.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.4.0/operator.yaml
wget https://download.elastic.co/downloads/eck/2.2.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.2.0/operator.yaml
kubectl apply -n elastic-system -f application.yml -f crds.yaml -f operator.yaml
```
@ -1,16 +1,15 @@
|
||||
---
|
||||
apiVersion: beat.k8s.elastic.co/v1beta1
|
||||
kind: Beat
|
||||
metadata:
|
||||
name: filebeat
|
||||
spec:
|
||||
type: filebeat
|
||||
version: 8.4.3
|
||||
version: 8.4.1
|
||||
elasticsearchRef:
|
||||
name: elasticsearch
|
||||
kibanaRef:
|
||||
name: kibana
|
||||
config:
|
||||
logging:
|
||||
level: warning
|
||||
http:
|
||||
enabled: true
|
||||
port: 5066
|
||||
@ -25,15 +24,50 @@ spec:
|
||||
type: container
|
||||
paths:
|
||||
- /var/log/containers/*${data.kubernetes.container.id}.log
|
||||
processors:
|
||||
- drop_fields:
|
||||
fields:
|
||||
- stream
|
||||
- target
|
||||
- host
|
||||
ignore_missing: true
|
||||
- rename:
|
||||
fields:
|
||||
- from: "kubernetes.node.name"
|
||||
to: "host"
|
||||
- from: "kubernetes.pod.name"
|
||||
to: "pod"
|
||||
- from: "kubernetes.labels.app"
|
||||
to: "app"
|
||||
- from: "kubernetes.namespace"
|
||||
to: "namespace"
|
||||
ignore_missing: true
|
||||
- drop_fields:
|
||||
fields:
|
||||
- input
|
||||
- agent
|
||||
- container
|
||||
- ecs
|
||||
- host
|
||||
- kubernetes
|
||||
- log
|
||||
- "@metadata"
|
||||
ignore_missing: true
|
||||
- decode_json_fields:
|
||||
fields:
|
||||
- message
|
||||
max_depth: 2
|
||||
expand_keys: true
|
||||
target: ""
|
||||
add_error_key: true
|
||||
daemonSet:
|
||||
podTemplate:
|
||||
metadata:
|
||||
annotations:
|
||||
co.elastic.logs/enabled: 'false'
|
||||
spec:
|
||||
serviceAccountName: filebeat
|
||||
automountServiceAccountToken: true
|
||||
terminationGracePeriodSeconds: 30
|
||||
dnsPolicy: ClusterFirstWithHostNet
|
||||
hostNetwork: true # Allows to provide richer host metadata
|
||||
containers:
|
||||
- name: filebeat
|
||||
securityContext:
|
||||
@ -50,12 +84,6 @@ spec:
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: spec.nodeName
|
||||
resources:
|
||||
limits:
|
||||
memory: 200Mi
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
- name: exporter
|
||||
image: sepa/beats-exporter
|
||||
args:
|
||||
@ -80,104 +108,6 @@ spec:
|
||||
- operator: "Exists"
|
||||
effect: "NoSchedule"
|
||||
---
|
||||
apiVersion: beat.k8s.elastic.co/v1beta1
|
||||
kind: Beat
|
||||
metadata:
|
||||
name: filebeat-syslog
|
||||
spec:
|
||||
type: filebeat
|
||||
version: 8.4.3
|
||||
elasticsearchRef:
|
||||
name: elasticsearch
|
||||
config:
|
||||
logging:
|
||||
level: warning
|
||||
http:
|
||||
enabled: true
|
||||
port: 5066
|
||||
filebeat:
|
||||
inputs:
|
||||
- type: syslog
|
||||
format: rfc5424
|
||||
protocol.udp:
|
||||
host: "0.0.0.0:1514"
|
||||
- type: syslog
|
||||
format: rfc5424
|
||||
protocol.tcp:
|
||||
host: "0.0.0.0:1514"
|
||||
deployment:
|
||||
replicas: 2
|
||||
podTemplate:
|
||||
metadata:
|
||||
annotations:
|
||||
co.elastic.logs/enabled: 'false'
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 30
|
||||
containers:
|
||||
- name: filebeat
|
||||
resources:
|
||||
limits:
|
||||
memory: 200Mi
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
ports:
|
||||
- containerPort: 1514
|
||||
name: syslog
|
||||
protocol: UDP
|
||||
volumeMounts:
|
||||
- name: filebeat-registry
|
||||
mountPath: /usr/share/filebeat/data
|
||||
- name: exporter
|
||||
image: sepa/beats-exporter
|
||||
args:
|
||||
- -p=5066
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
name: exporter
|
||||
protocol: TCP
|
||||
volumes:
|
||||
- name: filebeat-registry
|
||||
emptyDir: {}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: filebeat-syslog-udp
|
||||
annotations:
|
||||
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
|
||||
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
externalTrafficPolicy: Local
|
||||
loadBalancerIP: 172.20.51.4
|
||||
ports:
|
||||
- name: filebeat-syslog
|
||||
port: 514
|
||||
protocol: UDP
|
||||
targetPort: 1514
|
||||
selector:
|
||||
beat.k8s.elastic.co/name: filebeat-syslog
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: filebeat-syslog-tcp
|
||||
annotations:
|
||||
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
|
||||
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
externalTrafficPolicy: Local
|
||||
loadBalancerIP: 172.20.51.4
|
||||
ports:
|
||||
- name: filebeat-syslog
|
||||
port: 514
|
||||
protocol: TCP
|
||||
targetPort: 1514
|
||||
selector:
|
||||
beat.k8s.elastic.co/name: filebeat-syslog
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
@@ -218,10 +148,12 @@ kind: Elasticsearch
 metadata:
   name: elasticsearch
 spec:
-  version: 8.4.3
+  version: 8.4.1
   nodeSets:
     - name: default
-      count: 1
+      count: 3
+      config:
+        node.store.allow_mmap: false
       volumeClaimTemplates:
         - metadata:
             name: elasticsearch-data
@@ -231,7 +163,7 @@ spec:
           resources:
             requests:
               storage: 5Gi
-          storageClassName: longhorn
+          storageClassName: local-path
   http:
     tls:
       selfSignedCertificate:
@@ -242,8 +174,8 @@ kind: Kibana
 metadata:
   name: kibana
 spec:
-  version: 8.4.3
-  count: 1
+  version: 8.4.1
+  count: 2
   elasticsearchRef:
     name: elasticsearch
   http:
@@ -264,23 +196,6 @@ spec:
       entries:
         - key: elastic
           path: xpack.security.authc.providers.anonymous.anonymous1.credentials.password
-  podTemplate:
-    metadata:
-      annotations:
-        co.elastic.logs/enabled: 'false'
-    spec:
-      containers:
-        - name: kibana
-          readinessProbe:
-            httpGet:
-              path: /app/home
-              port: 5601
-              scheme: HTTP
-            initialDelaySeconds: 10
-            timeoutSeconds: 5
-            periodSeconds: 10
-            successThreshold: 1
-            failureThreshold: 3
 ---
 apiVersion: networking.k8s.io/v1
 kind: Ingress
@@ -288,6 +203,7 @@ metadata:
   name: kibana
   annotations:
     kubernetes.io/ingress.class: traefik
+    cert-manager.io/cluster-issuer: default
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
     traefik.ingress.kubernetes.io/router.tls: "true"
@@ -306,26 +222,5 @@ spec:
               number: 5601
   tls:
     - hosts:
-        - "*.k-space.ee"
----
-apiVersion: monitoring.coreos.com/v1
-kind: PodMonitor
-metadata:
-  name: filebeat
-spec:
-  selector:
-    matchLabels:
-      beat.k8s.elastic.co/name: filebeat
-  podMetricsEndpoints:
-    - port: exporter
----
-apiVersion: monitoring.coreos.com/v1
-kind: PodMonitor
-metadata:
-  name: elasticsearch
-spec:
-  selector:
-    matchLabels:
-      app.kubernetes.io/name: elasticsearch-exporter
-  podMetricsEndpoints:
-    - port: exporter
+        - kibana.k-space.ee
+      secretName: kibana-tls
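After a `version` or `count` change like the Elasticsearch/Kibana edits above, ECK reports rollout state in `.status.health`. A sketch for checking it; the resource names come from the manifests, while cluster access and the current kubectl context are assumptions:

```shell
# Map an ECK health value to an exit status; anything but "green" counts as not ready.
check_health() {
  case "$1" in
    green) return 0 ;;
    *) return 1 ;;
  esac
}
# Example invocations (require kubectl access to the cluster):
#   kubectl get elasticsearch elasticsearch -o jsonpath='{.status.health}'
#   kubectl get kibana kibana -o jsonpath='{.status.health}'
check_health green && echo "elasticsearch reports green"
```

During a version rollover ECK rolls nodes one at a time, so `yellow` is expected transiently and only persistent non-green health warrants investigation.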
@ -3,12 +3,12 @@ apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
controller-gen.kubebuilder.io/version: v0.9.1
|
||||
controller-gen.kubebuilder.io/version: v0.8.0
|
||||
creationTimestamp: null
|
||||
labels:
|
||||
app.kubernetes.io/instance: 'elastic-operator'
|
||||
app.kubernetes.io/name: 'eck-operator-crds'
|
||||
app.kubernetes.io/version: '2.4.0'
|
||||
app.kubernetes.io/version: '2.2.0'
|
||||
name: agents.agent.k8s.elastic.co
|
||||
spec:
|
||||
group: agent.k8s.elastic.co
|
||||
@ -203,7 +203,7 @@ spec:
|
||||
description: Spec is the specification of the service.
|
||||
properties:
|
||||
allocateLoadBalancerNodePorts:
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
|
||||
type: boolean
|
||||
clusterIP:
|
||||
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
|
||||
@ -246,7 +246,7 @@ spec:
|
||||
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
|
||||
type: string
|
||||
loadBalancerIP:
|
||||
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
|
||||
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
|
||||
type: string
|
||||
loadBalancerSourceRanges:
|
||||
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
|
||||
@ -259,7 +259,7 @@ spec:
|
||||
description: ServicePort contains information on service's port.
|
||||
properties:
|
||||
appProtocol:
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
type: string
|
||||
name:
|
||||
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
|
||||
@ -376,13 +376,6 @@ spec:
|
||||
- standalone
|
||||
- fleet
|
||||
type: string
|
||||
policyID:
|
||||
description: PolicyID optionally determines into which Agent Policy this Agent will be enrolled. If left empty the default policy will be used.
|
||||
type: string
|
||||
revisionHistoryLimit:
|
||||
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying DaemonSet or Deployment.
|
||||
format: int32
|
||||
type: integer
|
||||
secureSettings:
|
||||
description: SecureSettings is a list of references to Kubernetes Secrets containing sensitive configuration options for the Agent. Secrets data can be then referenced in the Agent config using the Secret's keys or as specified in `Entries` field of each SecureSetting.
|
||||
items:
|
||||
@ -455,18 +448,24 @@ spec:
|
||||
storage: true
|
||||
subresources:
|
||||
status: {}
|
||||
status:
|
||||
acceptedNames:
|
||||
kind: ""
|
||||
plural: ""
|
||||
conditions: []
|
||||
storedVersions: []
|
||||
---
|
||||
# Source: eck-operator-crds/templates/all-crds.yaml
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
controller-gen.kubebuilder.io/version: v0.9.1
|
||||
controller-gen.kubebuilder.io/version: v0.8.0
|
||||
creationTimestamp: null
|
||||
labels:
|
||||
app.kubernetes.io/instance: 'elastic-operator'
|
||||
app.kubernetes.io/name: 'eck-operator-crds'
|
||||
app.kubernetes.io/version: '2.4.0'
|
||||
app.kubernetes.io/version: '2.2.0'
|
||||
name: apmservers.apm.k8s.elastic.co
|
||||
spec:
|
||||
group: apm.k8s.elastic.co
|
||||
@ -566,7 +565,7 @@ spec:
|
||||
description: Spec is the specification of the service.
|
||||
properties:
|
||||
allocateLoadBalancerNodePorts:
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
|
||||
type: boolean
|
||||
clusterIP:
|
||||
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
|
||||
@ -609,7 +608,7 @@ spec:
|
||||
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
|
||||
type: string
|
||||
loadBalancerIP:
|
||||
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
|
||||
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
|
||||
type: string
|
||||
loadBalancerSourceRanges:
|
||||
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
|
||||
@ -622,7 +621,7 @@ spec:
|
||||
description: ServicePort contains information on service's port.
|
||||
properties:
|
||||
appProtocol:
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
type: string
|
||||
name:
|
||||
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
|
||||
@ -737,10 +736,6 @@ spec:
|
||||
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the APM Server pods.
|
||||
type: object
|
||||
x-kubernetes-preserve-unknown-fields: true
|
||||
revisionHistoryLimit:
|
||||
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
|
||||
format: int32
|
||||
type: integer
|
||||
secureSettings:
|
||||
description: SecureSettings is a list of references to Kubernetes secrets containing sensitive configuration options for APM Server.
|
||||
items:
|
||||
@ -797,10 +792,6 @@ spec:
|
||||
kibanaAssociationStatus:
|
||||
description: KibanaAssociationStatus is the status of any auto-linking to Kibana.
|
||||
type: string
|
||||
observedGeneration:
|
||||
description: ObservedGeneration represents the .metadata.generation that the status is based upon. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the APM Server controller has not yet processed the changes contained in the APM Server specification.
|
||||
format: int64
|
||||
type: integer
|
||||
secretTokenSecret:
|
||||
description: SecretTokenSecretName is the name of the Secret that contains the secret token
|
||||
type: string
|
||||
@ -904,7 +895,7 @@ spec:
|
||||
description: Spec is the specification of the service.
|
||||
properties:
|
||||
allocateLoadBalancerNodePorts:
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
|
||||
type: boolean
|
||||
clusterIP:
|
||||
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
|
||||
@ -947,7 +938,7 @@ spec:
|
||||
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
|
||||
type: string
|
||||
loadBalancerIP:
|
||||
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
|
||||
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
|
||||
type: string
|
||||
loadBalancerSourceRanges:
|
||||
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
|
||||
@ -960,7 +951,7 @@ spec:
|
||||
description: ServicePort contains information on service's port.
|
||||
properties:
|
||||
appProtocol:
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
type: string
|
||||
name:
|
||||
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
|
||||
@ -1121,18 +1112,24 @@ spec:
|
||||
type: object
|
||||
served: false
|
||||
storage: false
|
||||
status:
|
||||
acceptedNames:
|
||||
kind: ""
|
||||
plural: ""
|
||||
conditions: []
|
||||
storedVersions: []
|
||||
---
|
||||
# Source: eck-operator-crds/templates/all-crds.yaml
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
controller-gen.kubebuilder.io/version: v0.9.1
|
||||
controller-gen.kubebuilder.io/version: v0.8.0
|
||||
creationTimestamp: null
|
||||
labels:
|
||||
app.kubernetes.io/instance: 'elastic-operator'
|
||||
app.kubernetes.io/name: 'eck-operator-crds'
|
||||
app.kubernetes.io/version: '2.4.0'
|
||||
app.kubernetes.io/version: '2.2.0'
|
||||
name: beats.beat.k8s.elastic.co
|
||||
spec:
|
||||
group: beat.k8s.elastic.co
|
||||
@ -1297,10 +1294,6 @@ spec:
|
||||
description: ServiceName is the name of an existing Kubernetes service which is used to make requests to the referenced object. It has to be in the same namespace as the referenced resource. If left empty, the default HTTP service of the referenced resource is used.
|
||||
type: string
|
||||
type: object
|
||||
revisionHistoryLimit:
|
||||
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying DaemonSet or Deployment.
|
||||
format: int32
|
||||
type: integer
|
||||
secureSettings:
|
||||
description: SecureSettings is a list of references to Kubernetes Secrets containing sensitive configuration options for the Beat. Secrets data can be then referenced in the Beat config using the Secret's keys or as specified in `Entries` field of each SecureSetting.
|
||||
items:
|
||||
@ -1360,10 +1353,6 @@ spec:
|
||||
kibanaAssociationStatus:
|
||||
description: AssociationStatus is the status of an association resource.
|
||||
type: string
|
||||
observedGeneration:
|
||||
description: ObservedGeneration represents the .metadata.generation that the status is based upon. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the Beats controller has not yet processed the changes contained in the Beats specification.
|
||||
format: int64
|
||||
type: integer
|
||||
version:
|
||||
description: 'Version of the stack resource currently running. During version upgrades, multiple versions may run in parallel: this value specifies the lowest version currently running.'
|
||||
type: string
|
||||
@ -1373,18 +1362,24 @@ spec:
|
||||
storage: true
|
||||
subresources:
|
||||
status: {}
|
||||
status:
|
||||
acceptedNames:
|
||||
kind: ""
|
||||
plural: ""
|
||||
conditions: []
|
||||
storedVersions: []
|
||||
---
|
||||
# Source: eck-operator-crds/templates/all-crds.yaml
|
||||
apiVersion: apiextensions.k8s.io/v1
|
||||
kind: CustomResourceDefinition
|
||||
metadata:
|
||||
annotations:
|
||||
controller-gen.kubebuilder.io/version: v0.9.1
|
||||
controller-gen.kubebuilder.io/version: v0.8.0
|
||||
creationTimestamp: null
|
||||
labels:
|
||||
app.kubernetes.io/instance: 'elastic-operator'
|
||||
app.kubernetes.io/name: 'eck-operator-crds'
|
||||
app.kubernetes.io/version: '2.4.0'
|
||||
app.kubernetes.io/version: '2.2.0'
|
||||
name: elasticmapsservers.maps.k8s.elastic.co
|
||||
spec:
|
||||
group: maps.k8s.elastic.co
|
||||
@ -1491,7 +1486,7 @@ spec:
|
||||
description: Spec is the specification of the service.
|
||||
properties:
|
||||
allocateLoadBalancerNodePorts:
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
|
||||
type: boolean
|
||||
clusterIP:
|
||||
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
|
||||
@ -1534,7 +1529,7 @@ spec:
|
||||
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
|
||||
type: string
|
||||
loadBalancerIP:
|
||||
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
|
||||
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
|
||||
type: string
|
||||
loadBalancerSourceRanges:
|
||||
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -1547,7 +1542,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -1646,10 +1641,6 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the Elastic Maps Server pods
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
serviceAccountName:
description: ServiceAccountName is used to check access from the current resource to a resource (for ex. Elasticsearch) in a different namespace. Can only be used if ECK is enforcing RBAC on references.
type: string
@@ -1676,10 +1667,6 @@ spec:
health:
description: Health of the deployment.
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed for this Elastic Maps Server. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the Elastic Maps controller has not yet processed the changes contained in the Elastic Maps specification.
format: int64
type: integer
selector:
description: Selector is the label selector used to find all pods.
type: string
@@ -1696,18 +1683,24 @@ spec:
specReplicasPath: .spec.count
statusReplicasPath: .status.count
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.1
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.4.0'
app.kubernetes.io/version: '2.2.0'
name: elasticsearches.elasticsearch.k8s.elastic.co
spec:
group: elasticsearch.k8s.elastic.co
@@ -1810,7 +1803,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -1853,7 +1846,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -1866,7 +1859,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -2065,15 +2058,15 @@ spec:
type: string
type: object
spec:
description: 'spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -2088,9 +2081,8 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -2105,9 +2097,8 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
description: 'Resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
limits:
additionalProperties:
@@ -2129,7 +2120,7 @@ spec:
type: object
type: object
selector:
description: selector is a label query over volumes to consider for binding.
description: A label query over volumes to consider for binding.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -2158,22 +2149,21 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
description: 'Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
volumeMode:
description: volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
type: string
volumeName:
description: volumeName is the binding reference to the PersistentVolume backing this claim.
description: VolumeName is the binding reference to the PersistentVolume backing this claim.
type: string
type: object
status:
description: 'status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
@@ -2184,7 +2174,7 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
type: object
capacity:
additionalProperties:
@@ -2193,26 +2183,26 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: capacity represents the actual resources of the underlying volume.
description: Represents the actual resources of the underlying volume.
type: object
conditions:
description: conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
description: Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
items:
description: PersistentVolumeClaimCondition contails details about state of pvc
properties:
lastProbeTime:
description: lastProbeTime is the time we probed the condition.
description: Last time we probed the condition.
format: date-time
type: string
lastTransitionTime:
description: lastTransitionTime is the time the condition transitioned from one status to another.
description: Last time the condition transitioned from one status to another.
format: date-time
type: string
message:
description: message is the human-readable message indicating details about last transition.
description: Human-readable message indicating details about last transition.
type: string
reason:
description: reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
description: Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
type: string
status:
type: string
@@ -2225,10 +2215,10 @@ spec:
type: object
type: array
phase:
description: phase represents the current phase of PersistentVolumeClaim.
description: Phase represents the current phase of PersistentVolumeClaim.
type: string
resizeStatus:
description: resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
type: string
type: object
type: object
@@ -2277,7 +2267,7 @@ spec:
description: An eviction is allowed if at least "minAvailable" pods selected by "selector" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying "100%".
x-kubernetes-int-or-string: true
selector:
description: Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace.
description: Label query over pods whose evictions are managed by the disruption budget. A null selector selects no pods. An empty selector ({}) also selects no pods, which differs from standard behavior of selecting all pods. In policy/v1, an empty selector will select all pods in the namespace.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -2306,7 +2296,6 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
type: object
type: object
remoteClusters:
@@ -2335,10 +2324,6 @@ spec:
- name
type: object
type: array
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying StatefulSets.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes secrets containing sensitive configuration options for Elasticsearch.
items:
@@ -2399,7 +2384,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -2442,7 +2427,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -2455,7 +2440,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -2779,7 +2764,7 @@ spec:
|
||||
description: Spec is the specification of the service.
|
||||
properties:
|
||||
allocateLoadBalancerNodePorts:
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
|
||||
type: boolean
|
||||
clusterIP:
|
||||
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
|
||||
@ -2822,7 +2807,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
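Taken together, the two fields above can be sketched as follows; since loadBalancerIP is deprecated as of Kubernetes v1.24, an implementation-specific annotation is the suggested replacement (the annotation key shown is hypothetical and depends on the load-balancer implementation):

```yaml
# Hypothetical LoadBalancer Service restricted to one client CIDR.
apiVersion: v1
kind: Service
metadata:
  name: example-lb                              # hypothetical name
  annotations:
    example.com/load-balancer-ip: 172.21.53.1   # hypothetical, implementation-specific
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 172.21.0.0/16                             # only these clients may connect
  selector:
    app: example
  ports:
    - port: 443
```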
@ -2835,7 +2820,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
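A short sketch of the ServicePort fields described above (port names and numbers are hypothetical): un-prefixed appProtocol values are reserved for IANA service names, while non-standard protocols use a prefixed name.

```yaml
# Hypothetical ServicePort list: unique DNS_LABEL names, IANA vs prefixed appProtocol.
ports:
  - name: web                       # must be unique within the ServiceSpec
    port: 80
    appProtocol: http               # un-prefixed: IANA standard name
  - name: custom
    port: 9000
    appProtocol: mycompany.com/my-custom-protocol   # prefixed: non-standard
```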
@ -2983,15 +2968,15 @@ spec:
type: string
type: object
spec:
description: 'spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
dataSource:
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@ -3006,9 +2991,8 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@ -3023,9 +3007,8 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
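The dataSourceRef description above can be illustrated with a PVC pre-populated from a VolumeSnapshot; a sketch assuming the AnyVolumeDataSource feature gate and a snapshot-capable provisioner (all names are hypothetical):

```yaml
# Hypothetical PVC populated from an existing VolumeSnapshot via dataSourceRef.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data               # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: nightly-snapshot          # hypothetical snapshot name
```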
resources:
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
description: 'Resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
limits:
additionalProperties:
@ -3047,7 +3030,7 @@ spec:
type: object
type: object
selector:
description: selector is a label query over volumes to consider for binding.
description: A label query over volumes to consider for binding.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@ -3076,22 +3059,21 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
description: 'Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
volumeMode:
description: volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
type: string
volumeName:
description: volumeName is the binding reference to the PersistentVolume backing this claim.
description: VolumeName is the binding reference to the PersistentVolume backing this claim.
type: string
type: object
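The remaining spec fields above (storageClassName, volumeMode, resources) fit together as in this sketch; the class and claim names are hypothetical:

```yaml
# Hypothetical PVC requesting a raw block device from a named StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-device                  # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block                 # Filesystem is implied when omitted
  storageClassName: fast-local      # hypothetical StorageClass name
  resources:
    requests:
      storage: 50Gi
```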
status:
description: 'status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
@ -3102,7 +3084,7 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
type: object
capacity:
additionalProperties:
@ -3111,26 +3093,26 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: capacity represents the actual resources of the underlying volume.
description: Represents the actual resources of the underlying volume.
type: object
conditions:
description: conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
description: Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
items:
description: PersistentVolumeClaimCondition contains details about state of pvc
properties:
lastProbeTime:
description: lastProbeTime is the time we probed the condition.
description: Last time we probed the condition.
format: date-time
type: string
lastTransitionTime:
description: lastTransitionTime is the time the condition transitioned from one status to another.
description: Last time the condition transitioned from one status to another.
format: date-time
type: string
message:
description: message is the human-readable message indicating details about last transition.
description: Human-readable message indicating details about last transition.
type: string
reason:
description: reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
description: Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
type: string
status:
type: string
@ -3143,10 +3125,10 @@ spec:
type: object
type: array
phase:
description: phase represents the current phase of PersistentVolumeClaim.
description: Phase represents the current phase of PersistentVolumeClaim.
type: string
resizeStatus:
description: resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
type: string
type: object
type: object
@ -3225,7 +3207,6 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
type: object
type: object
secureSettings:
@ -3302,18 +3283,24 @@ spec:
type: object
served: false
storage: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.1
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.4.0'
app.kubernetes.io/version: '2.2.0'
name: enterprisesearches.enterprisesearch.k8s.elastic.co
spec:
group: enterprisesearch.k8s.elastic.co
@ -3420,7 +3407,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -3463,7 +3450,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -3476,7 +3463,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -3575,10 +3562,6 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the Enterprise Search pods.
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
serviceAccountName:
description: ServiceAccountName is used to check access from the current resource to a resource (for ex. Elasticsearch) in a different namespace. Can only be used if ECK is enforcing RBAC on references.
type: string
@ -3603,10 +3586,6 @@ spec:
health:
description: Health of the deployment.
type: string
observedGeneration:
description: ObservedGeneration represents the .metadata.generation that the status is based upon. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the Enterprise Search controller has not yet processed the changes contained in the Enterprise Search specification.
format: int64
type: integer
selector:
description: Selector is the label selector used to find all pods.
type: string
@ -3718,7 +3697,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -3761,7 +3740,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@ -3774,7 +3753,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@ -3912,18 +3891,24 @@ spec:
storage: false
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.1
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.4.0'
app.kubernetes.io/version: '2.2.0'
name: kibanas.kibana.k8s.elastic.co
spec:
group: kibana.k8s.elastic.co
@ -4039,7 +4024,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@ -4082,7 +4067,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
type: string
loadBalancerSourceRanges:
|
||||
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
|
||||
@ -4095,7 +4080,7 @@ spec:
|
||||
description: ServicePort contains information on service's port.
|
||||
properties:
|
||||
appProtocol:
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
type: string
|
||||
name:
|
||||
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
|
||||
@ -4244,10 +4229,6 @@ spec:
|
||||
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the Kibana pods
|
||||
type: object
|
||||
x-kubernetes-preserve-unknown-fields: true
|
||||
revisionHistoryLimit:
|
||||
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
|
||||
format: int32
|
||||
type: integer
|
||||
secureSettings:
|
||||
description: SecureSettings is a list of references to Kubernetes secrets containing sensitive configuration options for Kibana.
|
||||
items:
|
||||
@ -4414,7 +4395,7 @@ spec:
|
||||
description: Spec is the specification of the service.
|
||||
properties:
|
||||
allocateLoadBalancerNodePorts:
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
|
||||
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
|
||||
type: boolean
|
||||
clusterIP:
|
||||
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
|
||||
@ -4457,7 +4438,7 @@ spec:
|
||||
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
|
||||
type: string
|
||||
loadBalancerIP:
|
||||
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
|
||||
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
|
||||
type: string
|
||||
loadBalancerSourceRanges:
|
||||
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
|
||||
@ -4470,7 +4451,7 @@ spec:
|
||||
description: ServicePort contains information on service's port.
|
||||
properties:
|
||||
appProtocol:
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
|
||||
type: string
|
||||
name:
|
||||
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
|
||||
@ -4625,4 +4606,10 @@ spec:
|
||||
type: object
|
||||
served: false
|
||||
storage: false
|
||||
status:
|
||||
acceptedNames:
|
||||
kind: ""
|
||||
plural: ""
|
||||
conditions: []
|
||||
storedVersions: []
|
||||
|
||||
|
@@ -14,7 +14,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: v1
@@ -24,7 +24,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
---
# Source: eck-operator/templates/configmap.yaml
apiVersion: v1
@@ -34,7 +34,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
data:
eck.yaml: |-
log-verbosity: 0
@@ -54,7 +54,6 @@ data:
validate-storage-class: true
enable-webhook: true
webhook-name: elastic-webhook.k8s.elastic.co
enable-leader-election: true
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
@@ -63,7 +62,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
rules:
- apiGroups:
- "authorization.k8s.io"
@@ -71,22 +70,6 @@ rules:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- coordination.k8s.io
resources:
- leases
resourceNames:
- elastic-operator-leader
verbs:
- get
- watch
- update
- apiGroups:
- ""
resources:
@@ -268,7 +251,7 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
@@ -301,7 +284,7 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
@@ -332,7 +315,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@@ -350,7 +333,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
spec:
ports:
- name: https
@@ -367,7 +350,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
spec:
selector:
matchLabels:
@@ -380,7 +363,7 @@ spec:
# Rename the fields "error" to "error.message" and "source" to "event.source"
# This is to avoid a conflict with the ECS "error" and "source" documents.
"co.elastic.logs/raw": "[{\"type\":\"container\",\"json.keys_under_root\":true,\"paths\":[\"/var/log/containers/*${data.kubernetes.container.id}.log\"],\"processors\":[{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"error\",\"to\":\"_error\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_error\",\"to\":\"error.message\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"source\",\"to\":\"_source\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_source\",\"to\":\"event.source\"}]}}]}]"
"checksum/config": a99a5f63f628a1ca8df440c12506cdfbf17827a1175dc5765b05f22f92b12b95
"checksum/config": 302bbb79b6fb0ffa41fcc06e164252c7dad887cf4d8149c8e1e5203c7651277e
labels:
control-plane: elastic-operator
spec:
@@ -389,7 +372,7 @@ spec:
securityContext:
runAsNonRoot: true
containers:
- image: "docker.elastic.co/eck/eck-operator:2.4.0"
- image: "docker.elastic.co/eck/eck-operator:2.2.0"
imagePullPolicy: IfNotPresent
name: manager
args:
@@ -440,7 +423,7 @@ metadata:
name: elastic-webhook.k8s.elastic.co
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.4.0"
app.kubernetes.io/version: "2.2.0"
webhooks:
- clientConfig:
caBundle: Cg==

@@ -79,6 +79,7 @@ metadata:
namespace: etherpad
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@@ -96,7 +97,8 @@ spec:
number: 9001
tls:
- hosts:
- "*.k-space.ee"
- pad.k-space.ee
secretName: pad-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy

@@ -2,9 +2,9 @@ Before applying replace the secret with the actual one.

For debugging add `- --log-level=debug`:

```
wget https://raw.githubusercontent.com/kubernetes-sigs/external-dns/master/docs/contributing/crd-source/crd-manifest.yaml -O crd.yml
kubectl apply -n external-dns -f application.yml -f crd.yml
kubectl apply -n external-dns -f external-dns.yml
```

Insert TSIG secret:

@@ -24,20 +24,6 @@ rules:
- get
- list
- watch
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints
verbs:
- get
- watch
- list
- apiGroups:
- externaldns.k8s.io
resources:
- dnsendpoints/status
verbs:
- update
---
apiVersion: v1
kind: ServiceAccount
@@ -77,7 +63,7 @@ spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: k8s.gcr.io/external-dns/external-dns:v0.13.1
image: k8s.gcr.io/external-dns/external-dns:v0.10.2
envFrom:
- secretRef:
name: tsig-secret

@@ -1,19 +0,0 @@
# Grafana

```
kubectl create namespace grafana
kubectl apply -n grafana -f application.yml
```

## OIDC secret

See Authelia README on provisioning and updating OIDC secrets for Grafana

## Grafana post deployment steps

* Configure Prometheus datasource with URL set to
  `http://prometheus-operated.prometheus-operator.svc.cluster.local:9090`
* Configure Elasticsearch datasource with URL set to
  `http://elasticsearch.elastic-system.svc.cluster.local`,
  Time field name set to `timestamp` and
  ElasticSearch version set to `7.10+`
@@ -1,135 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-config
data:
grafana.ini: |
[log]
level = warn
[server]
domain = grafana.k-space.ee
root_url = https://%(domain)s/
[auth.generic_oauth]
name = OAuth
icon = signin
enabled = true
client_id = grafana
scopes = openid profile email groups
empty_scopes = false
auth_url = https://auth.k-space.ee/api/oidc/authorize
token_url = https://auth.k-space.ee/api/oidc/token
api_url = https://auth.k-space.ee/api/oidc/userinfo
allow_sign_up = true
role_attribute_path = contains(groups[*], 'Grafana Admins') && 'Admin' || 'Viewer'
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: grafana
name: grafana
spec:
revisionHistoryLimit: 0
serviceName: grafana
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
spec:
securityContext:
fsGroup: 472
containers:
- name: grafana
image: grafana/grafana:8.5.0
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 472
envFrom:
- secretRef:
name: oidc-secret
ports:
- containerPort: 3000
name: http-grafana
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /robots.txt
port: 3000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 3000
timeoutSeconds: 1
resources:
requests:
cpu: 250m
memory: 750Mi
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-data
- mountPath: /etc/grafana
name: grafana-config
volumes:
- name: grafana-config
configMap:
name: grafana-config
volumeClaimTemplates:
- metadata:
name: grafana-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: grafana
spec:
ports:
- port: 80
protocol: TCP
targetPort: http-grafana
selector:
app: grafana
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: grafana.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: grafana
port:
number: 80
tls:
- hosts:
- "*.k-space.ee"

@@ -35,7 +35,7 @@ data:
TRIVY_ADAPTER_URL: "http://harbor-trivy:8080"
REGISTRY_STORAGE_PROVIDER_NAME: "filesystem"
WITH_CHARTMUSEUM: "false"
LOG_LEVEL: "warning"
LOG_LEVEL: "info"
CONFIG_PATH: "/etc/core/app.conf"
CHART_CACHE_DRIVER: "redis"
_REDIS_URL_CORE: "redis://harbor-redis:6379/0?idle_timeout_seconds=30"
@@ -397,6 +397,7 @@ spec:
containers:
- name: core
image: goharbor/harbor-core:v2.4.2
imagePullPolicy: IfNotPresent
startupProbe:
httpGet:
path: /api/v2.0/ping
@@ -405,9 +406,16 @@ spec:
failureThreshold: 360
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 2
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/v2.0/projects
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 2
@@ -464,13 +472,6 @@ spec:
secret:
- name: psc
emptyDir: {}
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/jobservice/jobservice-dpl.yaml
apiVersion: apps/v1
@@ -501,6 +502,14 @@ spec:
containers:
- name: jobservice
image: goharbor/harbor-jobservice:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /api/v1/stats
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/v1/stats
@@ -535,13 +544,6 @@ spec:
- name: job-logs
persistentVolumeClaim:
claimName: harbor-jobservice
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/portal/deployment.yaml
apiVersion: apps/v1
@@ -572,6 +574,14 @@ spec:
containers:
- name: portal
image: goharbor/harbor-portal:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /
@@ -589,13 +599,6 @@ spec:
- name: portal-config
configMap:
name: "harbor-portal"
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/registry/registry-dpl.yaml
apiVersion: apps/v1
@@ -626,6 +629,14 @@ spec:
containers:
- name: registry
image: goharbor/registry-photon:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
scheme: HTTP
port: 5000
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /
@@ -653,6 +664,14 @@ spec:
subPath: config.yml
- name: registryctl
image: goharbor/harbor-registryctl:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /api/health
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/health
@@ -703,13 +722,6 @@ spec:
- name: registry-data
persistentVolumeClaim:
claimName: harbor-registry
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/database/database-ss.yaml
apiVersion: apps/v1
@@ -744,6 +756,7 @@ spec:
# we may remove it after several releases
- name: "data-migrator"
image: goharbor/harbor-db:v2.4.2
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "[ -e /var/lib/postgresql/data/postgresql.conf ] && [ ! -d /var/lib/postgresql/data/pgdata ] && mkdir -m 0700 /var/lib/postgresql/data/pgdata && mv /var/lib/postgresql/data/* /var/lib/postgresql/data/pgdata/ || true"]
volumeMounts:
@@ -756,6 +769,7 @@ spec:
# as "fsGroup" applied before the init container running, the container has enough permission to execute the command
- name: "data-permissions-ensurer"
image: goharbor/harbor-db:v2.4.2
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "chmod -R 700 /var/lib/postgresql/data/pgdata || true"]
volumeMounts:
@@ -765,6 +779,13 @@ spec:
containers:
- name: database
image: goharbor/harbor-db:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- /docker-healthcheck.sh
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
exec:
command:
@@ -790,13 +811,6 @@ spec:
emptyDir:
medium: Memory
sizeLimit: 512Mi
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: "database-data"
@@ -839,6 +853,12 @@ spec:
containers:
- name: redis
image: goharbor/redis-photon:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
tcpSocket:
port: 6379
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
tcpSocket:
port: 6379
@@ -848,13 +868,6 @@ spec:
- name: data
mountPath: /var/lib/redis
subPath:
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: data
@@ -957,6 +970,15 @@ spec:
mountPath: /home/scanner/.cache
subPath:
readOnly: false
livenessProbe:
httpGet:
scheme: HTTP
path: /probe/healthy
port: api-server
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 10
readinessProbe:
httpGet:
scheme: HTTP
@@ -973,13 +995,6 @@ spec:
requests:
cpu: 200m
memory: 512Mi
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: data
@@ -1001,6 +1016,7 @@ metadata:
labels:
app: harbor
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
ingress.kubernetes.io/proxy-body-size: "0"
ingress.kubernetes.io/ssl-redirect: "true"
@@ -1011,8 +1027,9 @@ metadata:
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- "*.k-space.ee"
- secretName: harbor-tls
hosts:
- harbor.k-space.ee
rules:
- http:
paths:

@@ -1,165 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
---
apiVersion: v1
kind: ConfigMap
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
data:
policy.yaml: |
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
LowNodeUtilization:
enabled: true
params:
nodeResourceUtilizationThresholds:
targetThresholds:
cpu: 50
memory: 50
pods: 50
thresholds:
cpu: 20
memory: 20
pods: 20
RemoveDuplicates:
enabled: true
RemovePodsHavingTooManyRestarts:
enabled: true
params:
podsHavingTooManyRestarts:
includingInitContainers: true
podRestartThreshold: 100
RemovePodsViolatingInterPodAntiAffinity:
enabled: true
RemovePodsViolatingNodeAffinity:
enabled: true
params:
nodeAffinityType:
- requiredDuringSchedulingIgnoredDuringExecution
RemovePodsViolatingNodeTaints:
enabled: true
RemovePodsViolatingTopologySpreadConstraint:
enabled: true
params:
includeSoftConstraints: false
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: descheduler
labels:
app.kubernetes.io/name: descheduler
rules:
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["create", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "delete"]
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
- apiGroups: ["scheduling.k8s.io"]
resources: ["priorityclasses"]
verbs: ["get", "watch", "list"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create", "update"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
resourceNames: ["descheduler"]
verbs: ["get", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: descheduler
labels:
app.kubernetes.io/name: descheduler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: descheduler
subjects:
- kind: ServiceAccount
name: descheduler
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: descheduler
namespace: kube-system
labels:
app.kubernetes.io/name: descheduler
spec:
replicas: 2
selector:
matchLabels: &selectorLabels
app.kubernetes.io/name: descheduler
template:
metadata:
labels: *selectorLabels
spec:
priorityClassName: system-cluster-critical
serviceAccountName: descheduler
containers:
- name: descheduler
image: "k8s.gcr.io/descheduler/descheduler:v0.25.1"
imagePullPolicy: IfNotPresent
command:
- "/bin/descheduler"
args:
- "--policy-config-file"
- "/policy-dir/policy.yaml"
- "--descheduling-interval"
- 5m
- "--v"
- "3"
- --leader-elect=true
ports:
- containerPort: 10258
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10258
scheme: HTTPS
initialDelaySeconds: 3
periodSeconds: 10
resources:
requests:
cpu: 500m
memory: 256Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumeMounts:
- mountPath: /policy-dir
name: policy-volume
volumes:
- name: policy-volume
configMap:
name: descheduler


@@ -159,9 +159,7 @@ spec:
     spec:
       automountServiceAccountToken: true
       containers:
-      - image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.7.0
-        args:
-        - --metric-labels-allowlist=pods=[*]
+      - image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.6.0
         livenessProbe:
           httpGet:
             path: /healthz
@@ -221,260 +219,3 @@ spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kube-state-metrics
spec:
  groups:
  - name: kube-state-metrics
    rules:
    - alert: KubernetesNodeReady
      expr: kube_node_status_condition{condition="Ready",status="true"} == 0
      for: 10m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes node not ready (instance {{ $labels.instance }})
        description: "Node {{ $labels.node }} has been unready for a long time\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesMemoryPressure
      expr: kube_node_status_condition{condition="MemoryPressure",status="true"} == 1
      for: 2m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes memory pressure (instance {{ $labels.instance }})
        description: "{{ $labels.node }} has MemoryPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesDiskPressure
      expr: kube_node_status_condition{condition="DiskPressure",status="true"} == 1
      for: 2m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes disk pressure (instance {{ $labels.instance }})
        description: "{{ $labels.node }} has DiskPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesOutOfDisk
      expr: kube_node_status_condition{condition="OutOfDisk",status="true"} == 1
      for: 2m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes out of disk (instance {{ $labels.instance }})
        description: "{{ $labels.node }} has OutOfDisk condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesOutOfCapacity
      expr: sum by (node) ((kube_pod_status_phase{phase="Running"} == 1) + on(uid) group_left(node) (0 * kube_pod_info{pod_template_hash=""})) / sum by (node) (kube_node_status_allocatable{resource="pods"}) * 100 > 90
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes out of capacity (instance {{ $labels.instance }})
        description: "{{ $labels.node }} is out of capacity\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesContainerOomKiller
      expr: (kube_pod_container_status_restarts_total - kube_pod_container_status_restarts_total offset 10m >= 1) and ignoring (reason) min_over_time(kube_pod_container_status_last_terminated_reason{reason="OOMKilled"}[10m]) == 1
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes container OOM killed (instance {{ $labels.instance }})
        description: "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} has been OOMKilled {{ $value }} times in the last 10 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesJobFailed
      expr: kube_job_status_failed > 0
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes Job failed (instance {{ $labels.instance }})
        description: "Job {{$labels.namespace}}/{{$labels.exported_job}} failed to complete\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesCronjobSuspended
      expr: kube_cronjob_spec_suspend != 0
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes CronJob suspended (instance {{ $labels.instance }})
        description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is suspended\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesPersistentvolumeclaimPending
      expr: kube_persistentvolumeclaim_status_phase{phase="Pending"} == 1
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes PersistentVolumeClaim pending (instance {{ $labels.instance }})
        description: "PersistentVolumeClaim {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is pending\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesVolumeOutOfDiskSpace
      expr: kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes * 100 < 10
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes Volume out of disk space (instance {{ $labels.instance }})
        description: "Volume is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesPersistentvolumeError
      expr: kube_persistentvolume_status_phase{phase=~"Failed|Pending", job="kube-state-metrics"} > 0
      for: 0m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes PersistentVolume error (instance {{ $labels.instance }})
        description: "Persistent volume is in bad state\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesStatefulsetDown
      expr: (kube_statefulset_status_replicas_ready / kube_statefulset_status_replicas_current) != 1
      for: 1m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes StatefulSet down (instance {{ $labels.instance }})
        description: "A StatefulSet went down\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesHpaScalingAbility
      expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="AbleToScale"} == 1
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes HPA scaling ability (instance {{ $labels.instance }})
        description: "Pod is unable to scale\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesHpaMetricAvailability
      expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="ScalingActive"} == 1
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes HPA metric availability (instance {{ $labels.instance }})
        description: "HPA is not able to collect metrics\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesHpaScaleCapability
      expr: kube_horizontalpodautoscaler_status_desired_replicas >= kube_horizontalpodautoscaler_spec_max_replicas
      for: 2m
      labels:
        severity: info
      annotations:
        summary: Kubernetes HPA scale capability (instance {{ $labels.instance }})
        description: "The maximum number of desired Pods has been hit\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesPodNotHealthy
      expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
      for: 0m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
        description: "Pod has been in a non-ready state for longer than 15 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesPodCrashLooping
      expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
        description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesReplicasetMismatch
      expr: kube_replicaset_spec_replicas != kube_replicaset_status_ready_replicas
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes ReplicaSet mismatch (instance {{ $labels.instance }})
        description: "ReplicaSet replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesDeploymentReplicasMismatch
      expr: kube_deployment_spec_replicas != kube_deployment_status_replicas_available
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes Deployment replicas mismatch (instance {{ $labels.instance }})
        description: "Deployment replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesStatefulsetReplicasMismatch
      expr: kube_statefulset_status_replicas_ready != kube_statefulset_status_replicas
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes StatefulSet replicas mismatch (instance {{ $labels.instance }})
        description: "A StatefulSet does not match the expected number of replicas.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesDeploymentGenerationMismatch
      expr: kube_deployment_status_observed_generation != kube_deployment_metadata_generation
      for: 10m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes Deployment generation mismatch (instance {{ $labels.instance }})
        description: "A Deployment has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesStatefulsetGenerationMismatch
      expr: kube_statefulset_status_observed_generation != kube_statefulset_metadata_generation
      for: 10m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes StatefulSet generation mismatch (instance {{ $labels.instance }})
        description: "A StatefulSet has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesStatefulsetUpdateNotRolledOut
      expr: max without (revision) (kube_statefulset_status_current_revision unless kube_statefulset_status_update_revision) * (kube_statefulset_replicas != kube_statefulset_status_replicas_updated)
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes StatefulSet update not rolled out (instance {{ $labels.instance }})
        description: "StatefulSet update has not been rolled out.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesDaemonsetRolloutStuck
      expr: kube_daemonset_status_number_ready / kube_daemonset_status_desired_number_scheduled * 100 < 100 or kube_daemonset_status_desired_number_scheduled - kube_daemonset_status_current_number_scheduled > 0
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes DaemonSet rollout stuck (instance {{ $labels.instance }})
        description: "Some Pods of DaemonSet are not scheduled or not ready\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesDaemonsetMisscheduled
      expr: sum by (namespace, daemonset) (kube_daemonset_status_number_misscheduled) > 0
      for: 1m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes DaemonSet misscheduled (instance {{ $labels.instance }})
        description: "Some DaemonSet Pods are running where they are not supposed to run\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesJobSlowCompletion
      expr: kube_job_spec_completions - kube_job_status_succeeded > 0
      for: 12h
      labels:
        severity: critical
      annotations:
        summary: Kubernetes Job slow completion (instance {{ $labels.instance }})
        description: "Kubernetes Job {{ $labels.namespace }}/{{ $labels.job_name }} did not complete in time.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesApiServerErrors
      expr: sum(rate(apiserver_request_total{job="apiserver",code=~"^(?:5..)$"}[1m])) / sum(rate(apiserver_request_total{job="apiserver"}[1m])) * 100 > 3
      for: 2m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes API server errors (instance {{ $labels.instance }})
        description: "Kubernetes API server is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesApiClientErrors
      expr: (sum(rate(rest_client_requests_total{code=~"(4|5).."}[1m])) by (instance, job) / sum(rate(rest_client_requests_total[1m])) by (instance, job)) * 100 > 1
      for: 2m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes API client errors (instance {{ $labels.instance }})
        description: "Kubernetes API client is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesClientCertificateExpiresNextWeek
      expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 7*24*60*60
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes client certificate expires next week (instance {{ $labels.instance }})
        description: "A client certificate used to authenticate to the apiserver is expiring next week.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesClientCertificateExpiresSoon
      expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 24*60*60
      for: 0m
      labels:
        severity: critical
      annotations:
        summary: Kubernetes client certificate expires soon (instance {{ $labels.instance }})
        description: "A client certificate used to authenticate to the apiserver is expiring in less than 24 hours.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
    - alert: KubernetesApiServerLatency
      expr: histogram_quantile(0.99, sum(rate(apiserver_request_latencies_bucket{subresource!="log",verb!~"^(?:CONNECT|WATCHLIST|WATCH|PROXY)$"} [10m])) WITHOUT (instance, resource)) / 1e+06 > 1
      for: 2m
      labels:
        severity: warning
      annotations:
        summary: Kubernetes API server latency (instance {{ $labels.instance }})
        description: "Kubernetes API server has a 99th percentile latency of {{ $value }} seconds for {{ $labels.verb }} {{ $labels.resource }}.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
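The `severity` labels attached to the rules above (`critical`, `warning`, `info`) are what an Alertmanager routing tree typically keys on. A minimal sketch of a matching route follows; the receiver names `slack-critical` and `slack-default` are hypothetical, not taken from this repository:

```yaml
route:
  receiver: slack-default            # hypothetical catch-all receiver
  routes:
  - matchers: ['severity="critical"']
    receiver: slack-critical         # hypothetical paging receiver
  - matchers: ['severity="warning"']
    receiver: slack-default
```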

@@ -1,197 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

@@ -269,6 +269,7 @@ metadata:
    certManager: "true"
    rewriteTarget: "true"
  annotations:
    cert-manager.io/cluster-issuer: default
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
@@ -288,4 +289,5 @@ spec:
              number: 80
  tls:
  - hosts:
    - "*.k-space.ee"
    - dashboard.k-space.ee
    secretName: dashboard-tls

@@ -14,7 +14,7 @@ To deploy:

```
kubectl create namespace logging
-kubectl apply -n logging -f zinc.yml -f application.yml -f filebeat.yml -f networkpolicy-base.yml
+kubectl apply -n logging -f mongodb-support.yml -f application.yml -f filebeat.yml -f networkpolicy-base.yml
kubectl rollout restart -n logging daemonset.apps/filebeat
```

452 logging/application.yml Normal file

@@ -0,0 +1,452 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  serviceName: elasticsearch
  revisionHistoryLimit: 0
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      securityContext:
        fsGroup: 1000
      containers:
      - name: elasticsearch
        image: elasticsearch:7.17.3
        securityContext:
          runAsNonRoot: true
          runAsUser: 1000
        env:
        - name: discovery.type
          value: single-node
        - name: xpack.security.enabled
          value: "false"
        ports:
        - containerPort: 9200
        readinessProbe:
          httpGet:
            path: /_cluster/health
            port: 9200
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            memory: "2147483648"
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
        - name: elasticsearch-tmp
          mountPath: /tmp/
      volumes:
      - emptyDir: {}
        name: elasticsearch-keystore
      - emptyDir: {}
        name: elasticsearch-tmp
      - emptyDir: {}
        name: elasticsearch-logs
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: "10Gi"
      storageClassName: longhorn
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  ports:
  - name: api
    port: 80
    targetPort: 9200
  selector:
    app: elasticsearch
---
apiVersion: v1
kind: Service
metadata:
  name: graylog-gelf-tcp
  labels:
    app: graylog
spec:
  ports:
  - name: graylog-gelf-tcp
    port: 12201
    protocol: TCP
    targetPort: 12201
  selector:
    app: graylog
---
apiVersion: v1
kind: Service
metadata:
  name: graylog-logstash
  labels:
    app: graylog
spec:
  ports:
  - name: graylog-logstash
    port: 5044
    protocol: TCP
  selector:
    app: graylog
---
apiVersion: v1
kind: Service
metadata:
  name: graylog-syslog-tcp
  labels:
    app: graylog
  annotations:
    external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
    metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  loadBalancerIP: 172.20.51.4
  ports:
  - name: graylog-syslog
    port: 514
    protocol: TCP
  selector:
    app: graylog
---
apiVersion: v1
kind: Service
metadata:
  name: graylog-syslog-udp
  labels:
    app: graylog
  annotations:
    external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
    metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  loadBalancerIP: 172.20.51.4
  ports:
  - name: graylog-syslog
    port: 514
    protocol: UDP
  selector:
    app: graylog
---
apiVersion: v1
kind: Service
metadata:
  name: graylog
  labels:
    app: graylog
spec:
  ports:
  - name: graylog
    port: 9000
    protocol: TCP
  selector:
    app: graylog
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: graylog
  labels:
    app: graylog
  annotations:
    keel.sh/policy: minor
    keel.sh/trigger: poll
    keel.sh/pollSchedule: "@midnight"
spec:
  serviceName: graylog
  revisionHistoryLimit: 0
  replicas: 1
  selector:
    matchLabels:
      app: graylog
  template:
    metadata:
      labels:
        app: graylog
      annotations:
        prometheus.io/port: "9833"
        prometheus.io/scrape: "true"
    spec:
      securityContext:
        fsGroup: 1100
      volumes:
      - name: graylog-config
        downwardAPI:
          items:
          - path: id
            fieldRef:
              fieldPath: metadata.name
      containers:
      - name: graylog
        image: graylog/graylog:4.3
        env:
        - name: GRAYLOG_MONGODB_URI
          valueFrom:
            secretKeyRef:
              name: mongodb-application-readwrite
              key: connectionString.standard
        - name: GRAYLOG_PROMETHEUS_EXPORTER_ENABLED
          value: "true"
        - name: GRAYLOG_PROMETHEUS_EXPORTER_BIND_ADDRESS
          value: "0.0.0.0:9833"
        - name: GRAYLOG_NODE_ID_FILE
          value: /config/id
        - name: GRAYLOG_HTTP_EXTERNAL_URI
          value: "https://graylog.k-space.ee/"
        - name: GRAYLOG_TRUSTED_PROXIES
          value: "0.0.0.0/0"
        - name: GRAYLOG_ELASTICSEARCH_HOSTS
          value: "http://elasticsearch"
        - name: GRAYLOG_MESSAGE_JOURNAL_ENABLED
          value: "false"
        - name: GRAYLOG_ROTATION_STRATEGY
          value: "size"
        - name: GRAYLOG_ELASTICSEARCH_MAX_SIZE_PER_INDEX
          value: "268435456"
        - name: GRAYLOG_ELASTICSEARCH_MAX_NUMBER_OF_INDICES
          value: "16"
        envFrom:
        - secretRef:
            name: graylog-secrets
        securityContext:
          runAsNonRoot: true
          runAsUser: 1100
        ports:
        - containerPort: 9000
          name: graylog
        - containerPort: 9833
          name: graylog-metrics
        livenessProbe:
          httpGet:
            path: /api/system/lbstatus
            port: 9000
          initialDelaySeconds: 5
          periodSeconds: 30
          failureThreshold: 3
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /api/system/lbstatus
            port: 9000
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - name: graylog-config
          mountPath: /config
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: graylog
  annotations:
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
spec:
  rules:
  - host: graylog.k-space.ee
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: graylog
            port:
              number: 9000
  tls:
  - hosts:
    - graylog.k-space.ee
    secretName: graylog-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: graylog
spec:
  podSelector:
    matchLabels:
      app: graylog
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: elasticsearch
    ports:
    - port: 9200
  - to:
    - podSelector:
        matchLabels:
          app: mongodb-svc
    ports:
    - port: 27017
  ingress:
  - from:
    - ipBlock:
        cidr: 172.23.0.0/16
    - ipBlock:
        cidr: 172.21.0.0/16
    - ipBlock:
        cidr: 100.102.0.0/16
    ports:
    - protocol: UDP
      port: 514
    - protocol: TCP
      port: 514
  - from:
    - podSelector:
        matchLabels:
          app: filebeat
    ports:
    - protocol: TCP
      port: 5044
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app: prometheus
    ports:
    - port: 9833
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: traefik
      podSelector:
        matchLabels:
          app.kubernetes.io/name: traefik
    ports:
    - protocol: TCP
      port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: elasticsearch
spec:
  podSelector:
    matchLabels:
      app: elasticsearch
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: graylog
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app: grafana
  egress:
  - to:
    - ipBlock:
        # geoip.elastic.co updates
        cidr: 0.0.0.0/0
    ports:
    - port: 443
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb
spec:
  members: 3
  type: ReplicaSet
  version: "5.0.9"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
  - name: readwrite
    db: application
    passwordSecretRef:
      name: mongodb-application-readwrite-password
    roles:
    - name: readWrite
      db: application
    scramCredentialsSecretName: mongodb-application-readwrite
  - name: readonly
    db: application
    passwordSecretRef:
      name: mongodb-application-readonly-password
    roles:
    - name: readOnly
      db: application
    scramCredentialsSecretName: mongodb-application-readonly
  statefulSet:
    spec:
      template:
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: app
                    operator: In
                    values:
                    - mongodb-svc
                topologyKey: kubernetes.io/hostname
          nodeSelector:
            dedicated: storage
          tolerations:
          - key: dedicated
            operator: Equal
            value: storage
            effect: NoSchedule
      volumeClaimTemplates:
      - metadata:
          name: logs-volume
        spec:
          storageClassName: local-path
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 512Mi
      - metadata:
          name: data-volume
        spec:
          storageClassName: local-path
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
@@ -6,15 +6,18 @@ metadata:
  namespace: logging
data:
  filebeat.yml: |-
    logging:
      level: warning
    setup:
      ilm:
        enabled: false
      template:
        name: filebeat
        pattern: filebeat-*
    http.enabled: true
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              in_cluster: true
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
    filebeat.autodiscover:
      providers:
        - type: kubernetes
@@ -24,24 +27,50 @@ data:
          type: container
          paths:
            - /var/log/containers/*${data.kubernetes.container.id}.log
    output:
      elasticsearch:
        hosts:
          - http://zinc:4080
        path: "/es/"
        index: "filebeat-%{+yyyy.MM.dd}"
        username: "${ZINC_FIRST_ADMIN_USER}"
        password: "${ZINC_FIRST_ADMIN_PASSWORD}"
    processors:
      - add_host_metadata:
      - drop_fields:
          fields:
            - stream
          ignore_missing: true
      - rename:
          fields:
            - from: "kubernetes.node.name"
              to: "source"
            - from: "kubernetes.pod.name"
              to: "pod"
            - from: "stream"
              to: "stream"
            - from: "kubernetes.labels.app"
              to: "app"
            - from: "kubernetes.namespace"
              to: "namespace"
          ignore_missing: true
      - drop_fields:
          fields:
            - agent
            - container
            - ecs
            - host
            - kubernetes
            - log
            - "@metadata"
          ignore_missing: true
    output.logstash:
      hosts: ["graylog-logstash:5044"]
    #output.console:
    #  pretty: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxUnavailable: 100%
  selector:
    matchLabels:
      app: filebeat
@@ -49,86 +78,72 @@ spec:
  template:
    metadata:
      labels:
        app: filebeat
      annotations:
        co.elastic.logs/json.keys_under_root: "true"
    spec:
      serviceAccountName: filebeat
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.4.1
          args:
            - -c
            - /etc/filebeat.yml
            - -e
          securityContext:
            runAsUser: 0
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: ZINC_FIRST_ADMIN_USER
              value: admin
            - name: ZINC_FIRST_ADMIN_PASSWORD
              value: salakala
          ports:
            - containerPort: 5066
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: filebeat-config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
        - name: exporter
          image: sepa/beats-exporter
          args:
            - -p=5066
          ports:
            - containerPort: 8080
              name: exporter
              protocol: TCP
      volumes:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.17.6
          args:
            - -c
            - /etc/filebeat.yml
            - -e
          securityContext:
            runAsUser: 0
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          ports:
            - containerPort: 5066
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: filebeat-config
              configMap:
                defaultMode: 0600
                name: filebeat-config
            - name: varlibdockercontainers
              hostPath:
                path: /var/lib/docker/containers
            - name: varlog
              hostPath:
                path: /var/log
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              hostPath:
                path: /var/lib/filebeat-data
                type: DirectoryOrCreate
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: filebeat-config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
      tolerations:
        - operator: "Exists"
          effect: "NoExecute"
        - operator: "Exists"
          effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
@@ -151,35 +166,13 @@ spec:
    matchLabels:
      app: filebeat
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: prometheus-operator
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: zinc
      ports:
        - protocol: TCP
          port: 4080
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  podMetricsEndpoints:
    - port: exporter
    - to:
        - podSelector:
            matchLabels:
              app: graylog
      ports:
        - protocol: TCP
          port: 5044
logging/zinc.yml
@@ -1,122 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: zinc
spec:
  clusterIP: None
  selector:
    app: zinc
  ports:
    - name: http
      port: 4080
      targetPort: 4080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zinc
spec:
  serviceName: zinc
  replicas: 1
  selector:
    matchLabels:
      app: zinc
  template:
    metadata:
      labels:
        app: zinc
    spec:
      securityContext:
        fsGroup: 2000
        runAsUser: 10000
        runAsGroup: 3000
        runAsNonRoot: true
      containers:
        - name: zinc
          image: public.ecr.aws/zinclabs/zinc:latest
          env:
            - name: GIN_MODE
              value: release
            - name: ZINC_FIRST_ADMIN_USER
              value: admin
            - name: ZINC_FIRST_ADMIN_PASSWORD
              value: salakala
            - name: ZINC_DATA_PATH
              value: /data
          imagePullPolicy: Always
          resources:
            limits:
              cpu: "4"
              memory: 4Gi
            requests:
              cpu: 32m
              memory: 50Mi
          ports:
            - containerPort: 4080
              name: http
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: longhorn
        resources:
          requests:
            storage: 20Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zinc
  annotations:
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
spec:
  rules:
    - host: zinc.k-space.ee
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: zinc
                port:
                  number: 4080
  tls:
    - hosts:
        - zinc.k-space.ee
      secretName: zinc-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: zinc
spec:
  podSelector:
    matchLabels:
      app: zinc
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: filebeat
      ports:
        - protocol: TCP
          port: 4080
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
@@ -1,491 +0,0 @@
---
apiVersion: codemowers.io/v1alpha1
kind: GeneratedSecret
metadata:
  name: logmower-readwrite-password
spec:
  mapping:
    - key: password
      value: "%(password)s"
---
apiVersion: codemowers.io/v1alpha1
kind: GeneratedSecret
metadata:
  name: logmower-readonly-password
spec:
  mapping:
    - key: password
      value: "%(password)s"
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: logmower-mongodb
spec:
  additionalMongodConfig:
    systemLog:
      quiet: true
  members: 2
  arbiters: 1
  type: ReplicaSet
  version: "6.0.3"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: readwrite
      db: application
      passwordSecretRef:
        name: logmower-readwrite-password
      roles:
        - name: readWrite
          db: application
      scramCredentialsSecretName: logmower-readwrite
    - name: readonly
      db: application
      passwordSecretRef:
        name: logmower-readonly-password
      roles:
        - name: read
          db: application
      scramCredentialsSecretName: logmower-readonly
  statefulSet:
    spec:
      logLevel: WARN
      template:
        spec:
          containers:
            - name: mongod
              resources:
                requests:
                  cpu: 100m
                  memory: 1Gi
                limits:
                  cpu: 4000m
                  memory: 1Gi
              volumeMounts:
                - name: journal-volume
                  mountPath: /data/journal
            - name: mongodb-agent
              resources:
                requests:
                  cpu: 1m
                  memory: 100Mi
                limits: {}
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: app
                        operator: In
                        values:
                          - logmower-mongodb-svc
                  topologyKey: kubernetes.io/hostname
          nodeSelector:
            dedicated: monitoring
          tolerations:
            - key: dedicated
              operator: Equal
              value: monitoring
              effect: NoSchedule
      volumeClaimTemplates:
        - metadata:
            name: logs-volume
            labels:
              usecase: logs
          spec:
            storageClassName: mongo
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Mi
        - metadata:
            name: journal-volume
            labels:
              usecase: journal
          spec:
            storageClassName: mongo
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 512Mi
        - metadata:
            name: data-volume
            labels:
              usecase: data
          spec:
            storageClassName: mongo
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logmower-shipper
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
  selector:
    matchLabels:
      app: logmower-shipper
  template:
    metadata:
      labels:
        app: logmower-shipper
    spec:
      serviceAccountName: logmower-shipper
      containers:
        - name: logmower-shipper
          image: harbor.k-space.ee/k-space/logmower-shipper-prototype:latest
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: MONGO_URI
              valueFrom:
                secretKeyRef:
                  name: logmower-mongodb-application-readwrite
                  key: connectionString.standard
          ports:
            - containerPort: 8000
              name: metrics
          securityContext:
            readOnlyRootFilesystem: true
          command:
            - /app/log_shipper.py
            - --parse-json
            - --normalize-log-level
            - --stream-to-log-level
            - --merge-top-level
            - --max-collection-size
            - "10000000000"
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: etcmachineid
              mountPath: /etc/machine-id
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: etcmachineid
          hostPath:
            path: /etc/machine-id
        - name: varlog
          hostPath:
            path: /var/log
      tolerations:
        - operator: "Exists"
          effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-logmower-shipper
subjects:
  - kind: ServiceAccount
    name: logmower-shipper
    namespace: logmower
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logmower-shipper
  labels:
    app: logmower-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logmower-shipper
spec:
  podSelector:
    matchLabels:
      app: logmower-shipper
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: prometheus-operator
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: logmower-mongodb-svc
      ports:
        - port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logmower-eventsource
spec:
  podSelector:
    matchLabels:
      app: logmower-eventsource
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: logmower-mongodb-svc
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logmower-frontend
spec:
  podSelector:
    matchLabels:
      app: logmower-frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: logmower-shipper
spec:
  selector:
    matchLabels:
      app: logmower-shipper
  podMetricsEndpoints:
    - port: metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: logmower-shipper
spec:
  groups:
    - name: logmower-shipper
      rules:
        - alert: LogmowerSingleInsertionErrors
          annotations:
            summary: Logmower shipper is having issues submitting log records
              to database
          expr: rate(logmower_insertion_error_count_total[30m]) > 0
          for: 0m
          labels:
            severity: warning
        - alert: LogmowerBulkInsertionErrors
          annotations:
            summary: Logmower shipper is having issues submitting log records
              to database
          expr: rate(logmower_bulk_insertion_error_count_total[30m]) > 0
          for: 0m
          labels:
            severity: warning
        - alert: LogmowerHighDatabaseLatency
          annotations:
            summary: Database operations are slow
          expr: histogram_quantile(0.95, logmower_database_operation_latency_bucket) > 10
          for: 1m
          labels:
            severity: warning
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: logmower
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: log.k-space.ee
      http:
        paths:
          - pathType: Prefix
            path: "/events"
            backend:
              service:
                name: logmower-eventsource
                port:
                  number: 3002
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: logmower-frontend
                port:
                  number: 8080
  tls:
    - hosts:
        - "*.k-space.ee"
---
apiVersion: v1
kind: Service
metadata:
  name: logmower-eventsource
spec:
  type: ClusterIP
  selector:
    app: logmower-eventsource
  ports:
    - protocol: TCP
      port: 3002
---
apiVersion: v1
kind: Service
metadata:
  name: logmower-frontend
spec:
  type: ClusterIP
  selector:
    app: logmower-frontend
  ports:
    - protocol: TCP
      port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logmower-frontend
spec:
  selector:
    matchLabels:
      app: logmower-frontend
  template:
    metadata:
      labels:
        app: logmower-frontend
    spec:
      containers:
        - name: logmower-frontend
          image: harbor.k-space.ee/k-space/logmower-frontend
          ports:
            - containerPort: 8080
              name: http
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          resources:
            limits:
              memory: 50Mi
            requests:
              cpu: 1m
              memory: 20Mi
          volumeMounts:
            - name: nginx-cache
              mountPath: /var/cache/nginx/
            - name: nginx-config
              mountPath: /var/config/nginx/
            - name: var-run
              mountPath: /var/run/
      volumes:
        - emptyDir: {}
          name: nginx-cache
        - emptyDir: {}
          name: nginx-config
        - emptyDir: {}
          name: var-run
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logmower-eventsource
spec:
  selector:
    matchLabels:
      app: logmower-eventsource
  template:
    metadata:
      labels:
        app: logmower-eventsource
    spec:
      containers:
        - name: logmower-eventsource
          image: harbor.k-space.ee/k-space/logmower-eventsource
          ports:
            - containerPort: 3002
              name: nodejs
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          resources:
            limits:
              cpu: 500m
              memory: 200Mi
            requests:
              cpu: 10m
              memory: 100Mi
          env:
            - name: MONGODB_HOST
              valueFrom:
                secretKeyRef:
                  name: logmower-mongodb-application-readonly
                  key: connectionString.standard
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logmower-mongodb
spec:
  podSelector:
    matchLabels:
      app: logmower-mongodb-svc
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}
      ports:
        - port: 27017
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: logmower-mongodb-svc
      ports:
        - port: 27017
@@ -1,47 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logmower-mongoexpress
spec:
  revisionHistoryLimit: 0
  replicas: 1
  selector:
    matchLabels:
      app: logmower-mongoexpress
  template:
    metadata:
      labels:
        app: logmower-mongoexpress
    spec:
      containers:
        - name: mongoexpress
          image: mongo-express
          ports:
            - name: mongoexpress
              containerPort: 8081
          env:
            - name: ME_CONFIG_MONGODB_URL
              valueFrom:
                secretKeyRef:
                  name: logmower-mongodb-application-readonly
                  key: connectionString.standard
            - name: ME_CONFIG_MONGODB_ENABLE_ADMIN
              value: "true"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: logmower-mongoexpress
spec:
  podSelector:
    matchLabels:
      app: logmower-mongoexpress
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: logmower-mongodb-svc
      ports:
        - port: 27017
@@ -1 +0,0 @@
../shared/networkpolicy-base.yml
@@ -1,8 +1,8 @@
# Longhorn distributed block storage system

The manifest was fetched from
https://raw.githubusercontent.com/longhorn/longhorn/v1.4.0/deploy/longhorn.yaml
and then heavily modified as per `changes.diff`
https://raw.githubusercontent.com/longhorn/longhorn/v1.2.4/deploy/longhorn.yaml
and then heavily modified.

To deploy Longhorn use the following:
@@ -5,6 +5,7 @@ metadata:
  namespace: longhorn-system
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: default
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
@@ -23,7 +24,9 @@ spec:
              number: 80
  tls:
    - hosts:
        - "*.k-space.ee"
        - longhorn.k-space.ee
      secretName: longhorn-tls
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
File diff suppressed because it is too large
@@ -1,92 +0,0 @@
--- ref	2023-02-20 11:15:07.340650467 +0200
+++ application.yml	2023-02-19 18:38:05.059234209 +0200
@@ -60,14 +60,14 @@
   storageclass.kubernetes.io/is-default-class: "true"
 provisioner: driver.longhorn.io
 allowVolumeExpansion: true
-reclaimPolicy: "Delete"
+reclaimPolicy: "Retain"
 volumeBindingMode: Immediate
 parameters:
-  numberOfReplicas: "3"
+  numberOfReplicas: "2"
   staleReplicaTimeout: "30"
   fromBackup: ""
-  fsType: "ext4"
-  dataLocality: "disabled"
+  fsType: "xfs"
+  dataLocality: "best-effort"
 ---
 # Source: longhorn/templates/crds.yaml
 apiVersion: apiextensions.k8s.io/v1
@@ -3869,6 +3869,11 @@
       app.kubernetes.io/version: v1.4.0
       app: longhorn-manager
     spec:
+      tolerations:
+        - key: dedicated
+          operator: Equal
+          value: storage
+          effect: NoSchedule
       initContainers:
         - name: wait-longhorn-admission-webhook
           image: longhornio/longhorn-manager:v1.4.0
@@ -3968,6 +3973,10 @@
       app.kubernetes.io/version: v1.4.0
       app: longhorn-driver-deployer
     spec:
+      tolerations:
+        - key: dedicated
+          operator: Equal
+          value: storage
       initContainers:
         - name: wait-longhorn-manager
           image: longhornio/longhorn-manager:v1.4.0
@@ -4037,6 +4046,11 @@
       app.kubernetes.io/version: v1.4.0
       app: longhorn-recovery-backend
     spec:
+      tolerations:
+        - key: dedicated
+          operator: Equal
+          value: storage
+          effect: NoSchedule
       affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
@@ -4103,6 +4117,11 @@
       app.kubernetes.io/version: v1.4.0
       app: longhorn-ui
     spec:
+      tolerations:
+        - key: dedicated
+          operator: Equal
+          value: storage
+          effect: NoSchedule
       affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
@@ -4166,6 +4185,11 @@
       app.kubernetes.io/version: v1.4.0
       app: longhorn-conversion-webhook
     spec:
+      tolerations:
+        - key: dedicated
+          operator: Equal
+          value: storage
+          effect: NoSchedule
       affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
@@ -4226,6 +4250,11 @@
       app.kubernetes.io/version: v1.4.0
       app: longhorn-admission-webhook
     spec:
+      tolerations:
+        - key: dedicated
+          operator: Equal
+          value: storage
+          effect: NoSchedule
       affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
@@ -1,158 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: doorboy-proxy
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
spec:
  revisionHistoryLimit: 0
  replicas: 3
  selector:
    matchLabels: &selectorLabels
      app.kubernetes.io/name: doorboy-proxy
  template:
    metadata:
      labels: *selectorLabels
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app.kubernetes.io/name
                      operator: In
                      values:
                        - doorboy-proxy
                topologyKey: kubernetes.io/hostname
              weight: 100
      containers:
        - name: doorboy-proxy
          image: harbor.k-space.ee/k-space/doorboy-proxy:latest
          envFrom:
            - secretRef:
                name: doorboy-api
          env:
            - name: MONGO_URI
              valueFrom:
                secretKeyRef:
                  name: mongo-application-readwrite
                  key: connectionString.standard
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          ports:
            - containerPort: 5000
              name: "http"
          resources:
            requests:
              memory: "200Mi"
              cpu: "100m"
            limits:
              memory: "500Mi"
              cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: doorboy-proxy
spec:
  selector:
    app.kubernetes.io/name: doorboy-proxy
  ports:
    - protocol: TCP
      name: http
      port: 5000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: doorboy-proxy
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
  rules:
    - host: doorboy-proxy.k-space.ee
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: doorboy-proxy
                port:
                  name: http
  tls:
    - hosts:
        - "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: doorboy-proxy
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: doorboy-proxy
  podMetricsEndpoints:
    - port: http
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kdoorpi
spec:
  selector:
    matchLabels: &selectorLabels
      app.kubernetes.io/name: kdoorpi
  template:
    metadata:
      labels: *selectorLabels
    spec:
      containers:
        - name: kdoorpi
          image: harbor.k-space.ee/k-space/kdoorpi:latest
          env:
            - name: KDOORPI_API_ALLOWED
              value: https://doorboy-proxy.k-space.ee/allowed
            - name: KDOORPI_API_LONGPOLL
              value: https://doorboy-proxy.k-space.ee/longpoll
            - name: KDOORPI_API_SWIPE
              value: http://172.21.99.98/swipe
            - name: KDOORPI_DOOR
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: KDOORPI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: doorboy-api
                  key: DOORBOY_SECRET
            - name: KDOORPI_UID_SALT
              valueFrom:
                secretKeyRef:
                  name: doorboy-uid-hash-salt
                  key: KDOORPI_UID_SALT
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
      nodeSelector:
        dedicated: door
      tolerations:
        - key: dedicated
          operator: Equal
          value: door
          effect: NoSchedule
        - key: arch
          operator: Equal
          value: arm64
          effect: NoSchedule
meta-operator/README.md
@@ -0,0 +1,11 @@
# meta-operator

Meta operator enables creating operators without building any binaries or
Docker images.

For an example operator declaration see `keydb.yml`

```
kubectl create namespace meta-operator
kubectl apply -f application.yml -f keydb.yml
```
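As a rough sketch, a `ClusterOperator` declaration following the CRD schema in `application.yml` could look like the fragment below. This is a hypothetical example for illustration only, not the actual contents of `keydb.yml`; all field values here are made up.

```yaml
# Hypothetical ClusterOperator sketch; see keydb.yml for the real declaration.
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
  name: keydb
spec:
  # Which custom resource this operator watches (illustrative values)
  resource:
    group: codemowers.io
    version: v1alpha1
    plural: keydbs
  # Optionally generate a secret per claimed resource
  secret:
    enabled: true
    name: keydb-secrets
    structure:
      - key: password
        value: "%(password)s"
  # Workloads and services stamped out for each resource instance
  services: []
  statefulsets: []
```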
meta-operator/application.yml
@@ -0,0 +1,220 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusteroperators.codemowers.io
spec:
  group: codemowers.io
  names:
    plural: clusteroperators
    singular: clusteroperator
    kind: ClusterOperator
    shortNames:
      - clusteroperator
  scope: Cluster
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                resource:
                  type: object
                  properties:
                    group:
                      type: string
                    version:
                      type: string
                    plural:
                      type: string
                secret:
                  type: object
                  properties:
                    name:
                      type: string
                    enabled:
                      type: boolean
                    structure:
                      type: array
                      items:
                        type: object
                        properties:
                          key:
                            type: string
                          value:
                            type: string
                services:
                  type: array
                  items:
                    type: object
                    x-kubernetes-preserve-unknown-fields: true
                deployments:
                  type: array
                  items:
                    type: object
                    x-kubernetes-preserve-unknown-fields: true
                statefulsets:
                  type: array
                  items:
                    type: object
                    x-kubernetes-preserve-unknown-fields: true
                configmaps:
                  type: array
                  items:
                    type: object
                    x-kubernetes-preserve-unknown-fields: true
                customresources:
                  type: array
                  items:
                    type: object
                    x-kubernetes-preserve-unknown-fields: true
          required: ["spec"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: meta-operator
  namespace: meta-operator
  labels:
    app.kubernetes.io/name: meta-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: meta-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/name: meta-operator
    spec:
      serviceAccountName: meta-operator
      containers:
        - name: meta-operator
          image: harbor.k-space.ee/k-space/meta-operator
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
  name: meta
spec:
  resource:
    group: codemowers.io
    version: v1alpha1
    plural: clusteroperators
  secret:
    enabled: false
  deployments:
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: foobar-operator
        labels:
          app.kubernetes.io/name: foobar-operator
      spec:
        replicas: 1
        selector:
          matchLabels:
            app.kubernetes.io/name: foobar-operator
        template:
          metadata:
            labels:
              app.kubernetes.io/name: foobar-operator
          spec:
            serviceAccountName: meta-operator
            containers:
              - name: meta-operator
                image: harbor.k-space.ee/k-space/meta-operator
                command:
                  - /meta-operator.py
                  - --target
                  - foobar
                securityContext:
                  readOnlyRootFilesystem: true
                  runAsNonRoot: true
                  runAsUser: 1000
                env:
                  - name: MY_POD_NAMESPACE
                    valueFrom:
                      fieldRef:
                        fieldPath: metadata.namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: meta-operator
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
      - configmaps
      - services
    verbs:
      - create
      - get
      - patch
      - update
      - delete
      - list
  - apiGroups:
      - apps
    resources:
      - deployments
      - statefulsets
    verbs:
      - create
      - delete
      - list
      - update
      - patch
  - apiGroups:
      - codemowers.io
    resources:
      - bindzones
      - clusteroperators
      - keydbs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - k-space.ee
    resources:
      - cams
    verbs:
      - get
      - list
      - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: meta-operator
  namespace: meta-operator
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: meta-operator
subjects:
  - kind: ServiceAccount
    name: meta-operator
    namespace: meta-operator
roleRef:
  kind: ClusterRole
  name: meta-operator
  apiGroup: rbac.authorization.k8s.io
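The `secret.structure` field declared in this CRD (used concretely in `keydb.yml`) templates a generated credential into named Secret keys. A minimal sketch of how such a template could be expanded, assuming `%s` marks where the generated password goes (illustrative helper, not the operator's real code):

```python
import secrets

# Hedged sketch: expand a ClusterOperator `secret.structure` list into
# Secret data. The function name and password length are assumptions.
def build_secret(structure):
    password = secrets.token_urlsafe(30)
    return {entry["key"]: entry["value"].replace("%s", password)
            for entry in structure}

structure = [
    {"key": "REDIS_PASSWORD", "value": "%s"},
    {"key": "REDIS_URI", "value": "redis://:%s@foobar"},
]
data = build_secret(structure)
```

The same generated value lands in every key that references `%s`, so consumers can take either the bare password or a ready-made connection URI.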
meta-operator/keydb.yml (new file, 253 lines)
@@ -0,0 +1,253 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: keydbs.codemowers.io
spec:
  group: codemowers.io
  names:
    plural: keydbs
    singular: keydb
    kind: KeyDBCluster
    shortNames:
      - keydb
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  description: Replica count
          required: ["spec"]
---
apiVersion: codemowers.io/v1alpha1
kind: ClusterOperator
metadata:
  name: keydb
spec:
  resource:
    group: codemowers.io
    version: v1alpha1
    plural: keydbs
  secret:
    enabled: true
    name: foobar-secrets
    structure:
      - key: REDIS_PASSWORD
        value: "%s"
      - key: REDIS_URI
        value: "redis://:%s@foobar"
  configmaps:
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: foobar-scripts
        labels:
          app.kubernetes.io/name: foobar
      data:
        entrypoint.sh: |
          #!/bin/bash
          set -euxo pipefail
          host="$(hostname)"
          port="6379"
          replicas=()
          for node in {0..2}; do
            if [ "${host}" != "redis-${node}" ]; then
              replicas+=("--replicaof redis-${node}.redis-headless ${port}")
            fi
          done
          exec keydb-server /etc/keydb/redis.conf \
            --active-replica "yes" \
            --multi-master "yes" \
            --appendonly "no" \
            --bind "0.0.0.0" \
            --port "${port}" \
            --protected-mode "no" \
            --server-threads "2" \
            --masterauth "${REDIS_PASSWORD}" \
            --requirepass "${REDIS_PASSWORD}" \
            "${replicas[@]}"
        ping_readiness_local.sh: |-
          #!/bin/bash
          set -e
          [[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
          response="$(
            timeout -s 3 "${1}" \
              keydb-cli \
                -h localhost \
                -p 6379 \
                ping
          )"
          if [ "${response}" != "PONG" ]; then
            echo "${response}"
            exit 1
          fi
        ping_liveness_local.sh: |-
          #!/bin/bash
          set -e
          [[ -n "${REDIS_PASSWORD}" ]] && export REDISCLI_AUTH="${REDIS_PASSWORD}"
          response="$(
            timeout -s 3 "${1}" \
              keydb-cli \
                -h localhost \
                -p 6379 \
                ping
          )"
          if [ "${response}" != "PONG" ] && [[ ! "${response}" =~ ^.*LOADING.*$ ]]; then
            echo "${response}"
            exit 1
          fi
        cleanup_tempfiles.sh: |-
          #!/bin/bash
          set -e
          find /data/ -type f \( -name "temp-*.aof" -o -name "temp-*.rdb" \) -mmin +60 -delete
  services:
    - apiVersion: v1
      kind: Service
      metadata:
        name: foobar-headless
        labels:
          app.kubernetes.io/name: foobar
      spec:
        type: ClusterIP
        clusterIP: None
        ports:
          - name: redis
            port: 6379
            protocol: TCP
            targetPort: redis
        selector:
          app.kubernetes.io/name: foobar
    - apiVersion: v1
      kind: Service
      metadata:
        name: foobar
        labels:
          app.kubernetes.io/name: foobar
        annotations: {}
      spec:
        type: ClusterIP
        ports:
          - name: redis
            port: 6379
            protocol: TCP
            targetPort: redis
          - name: exporter
            port: 9121
            protocol: TCP
            targetPort: exporter
        selector:
          app.kubernetes.io/name: foobar
        sessionAffinity: ClientIP
  statefulsets:
    - apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: foobar
        labels:
          app.kubernetes.io/name: foobar
      spec:
        replicas: 3
        serviceName: foobar-headless
        selector:
          matchLabels:
            app.kubernetes.io/name: foobar
        template:
          metadata:
            labels:
              app.kubernetes.io/name: foobar
          spec:
            affinity:
              podAntiAffinity:
                preferredDuringSchedulingIgnoredDuringExecution:
                  - podAffinityTerm:
                      labelSelector:
                        matchExpressions:
                          - key: app.kubernetes.io/name
                            operator: In
                            values:
                              - 'foobar'
                      topologyKey: kubernetes.io/hostname
                    weight: 100
            containers:
              - name: redis
                image: eqalpha/keydb:x86_64_v6.3.1
                imagePullPolicy: Always
                command:
                  - /scripts/entrypoint.sh
                ports:
                  - name: redis
                    containerPort: 6379
                    protocol: TCP
                livenessProbe:
                  initialDelaySeconds: 20
                  periodSeconds: 5
                  # One second longer than command timeout should prevent generation of zombie processes.
                  timeoutSeconds: 6
                  successThreshold: 1
                  failureThreshold: 5
                  exec:
                    command:
                      - sh
                      - -c
                      - /scripts/ping_liveness_local.sh 5
                readinessProbe:
                  initialDelaySeconds: 20
                  periodSeconds: 5
                  # One second longer than command timeout should prevent generation of zombie processes.
                  timeoutSeconds: 2
                  successThreshold: 1
                  failureThreshold: 5
                  exec:
                    command:
                      - sh
                      - -c
                      - /scripts/ping_readiness_local.sh 1
                startupProbe:
                  periodSeconds: 5
                  # One second longer than command timeout should prevent generation of zombie processes.
                  timeoutSeconds: 2
                  failureThreshold: 24
                  exec:
                    command:
                      - sh
                      - -c
                      - /scripts/ping_readiness_local.sh 1
                resources: {}
                securityContext: {}
                volumeMounts:
                  - name: foobar-scripts
                    mountPath: /scripts
                  - name: foobar-data
                    mountPath: /data
                envFrom:
                  - secretRef:
                      name: foobar-secrets
              - name: exporter
                image: quay.io/oliver006/redis_exporter
                ports:
                  - name: exporter
                    containerPort: 9121
                envFrom:
                  - secretRef:
                      name: foobar-secrets
                securityContext: {}
            volumes:
              - name: foobar-scripts
                configMap:
                  name: foobar-scripts
                  defaultMode: 0755
              - name: foobar-data
                emptyDir: {}
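The replica discovery loop in `entrypoint.sh` above can be sketched outside bash. This mirrors the logic only (each pod replicates from every peer except itself), using the same `redis-N.redis-headless` naming the script assumes:

```python
# Sketch of the entrypoint's replica discovery for a 3-member StatefulSet:
# every pod passes --replicaof flags for its two peers, never for itself.
def replica_args(hostname, members=3, port=6379):
    return [
        f"--replicaof redis-{n}.redis-headless {port}"
        for n in range(members)
        if hostname != f"redis-{n}"
    ]
```

Combined with `--active-replica yes --multi-master yes`, this yields a full mesh where every member accepts writes and replicates from all others.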
@@ -6,13 +6,11 @@ metadata:
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - # TODO: Not sure why mysql-operator needs to be able to connect
      to:
        - namespaceSelector: {}
      ports:
        - protocol: TCP
          port: 33060
        - protocol: TCP
          port: 3306
@@ -559,10 +559,10 @@ metadata:
   name: mysql-operator
   namespace: mysql-operator
   labels:
-    version: "8.0.30-2.0.6"
+    version: "8.0.30-2.0.5"
     app.kubernetes.io/name: mysql-operator
     app.kubernetes.io/instance: mysql-operator
-    app.kubernetes.io/version: "8.0.30-2.0.6"
+    app.kubernetes.io/version: "8.0.30-2.0.5"
     app.kubernetes.io/component: controller
     app.kubernetes.io/managed-by: helm
     app.kubernetes.io/created-by: helm
@@ -578,7 +578,7 @@ spec:
     spec:
       containers:
         - name: mysql-operator
-          image: mysql/mysql-operator:8.0.30-2.0.6
+          image: mysql/mysql-operator:8.0.30-2.0.5
           imagePullPolicy: IfNotPresent
           args: ["mysqlsh", "--log-level=@INFO", "--pym", "mysqloperator", "operator"]
           env:
@@ -1,9 +0,0 @@
# Nyancat server deployment

Something silly for a change.

To connect use:

```
telnet nyancat.k-space.ee
```
@@ -1,49 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nyancat
  namespace: nyancat
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: nyancat
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nyancat
    spec:
      containers:
        - name: nyancat
          image: harbor.k-space.ee/k-space/nyancat-server:latest
          command:
            - onenetd
            - -v1
            - "0"
            - "2323"
            - nyancat
            - -I
            - --telnet
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 65534
---
apiVersion: v1
kind: Service
metadata:
  name: nyancat
  namespace: nyancat
  annotations:
    metallb.universe.tf/address-pool: eenet
    external-dns.alpha.kubernetes.io/hostname: nyancat.k-space.ee
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: nyancat
  ports:
    - protocol: TCP
      port: 23
      targetPort: 2323
@@ -1,11 +0,0 @@
# Raw file based local PV-s

We currently only use the `rawfile-localpv` portion of OpenEBS.

The manifests were rendered using Helm templates from https://github.com/openebs/rawfile-localpv
and subsequently modified.

```
kubectl create namespace openebs
kubectl apply -n openebs -f rawfile.yaml
```
@@ -1,404 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rawfile-csi-driver
  namespace: openebs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rawfile-csi-provisioner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["get", "list"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csistoragecapacities"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]
  - apiGroups: ["apps"]
    resources: ["daemonsets"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rawfile-csi-broker
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rawfile-csi-resizer
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rawfile-csi-provisioner
subjects:
  - kind: ServiceAccount
    name: rawfile-csi-driver
    namespace: openebs
roleRef:
  kind: ClusterRole
  name: rawfile-csi-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rawfile-csi-broker
subjects:
  - kind: ServiceAccount
    name: rawfile-csi-driver
    namespace: openebs
roleRef:
  kind: ClusterRole
  name: rawfile-csi-broker
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rawfile-csi-resizer
subjects:
  - kind: ServiceAccount
    name: rawfile-csi-driver
    namespace: openebs
roleRef:
  kind: ClusterRole
  name: rawfile-csi-resizer
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
  name: rawfile-csi-controller
  namespace: openebs
  labels:
    app.kubernetes.io/name: rawfile-csi
    component: controller
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: rawfile-csi
    component: controller
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: rawfile-csi-node
  namespace: openebs
  labels:
    app.kubernetes.io/name: rawfile-csi
    component: node
spec:
  type: ClusterIP
  ports:
    - name: metrics
      port: 9100
      targetPort: metrics
      protocol: TCP
  selector:
    app.kubernetes.io/name: rawfile-csi
    component: node
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rawfile-csi-node
  namespace: openebs
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: "100%"
  selector:
    matchLabels: &selectorLabels
      app.kubernetes.io/name: rawfile-csi
      component: node
  template:
    metadata:
      labels: *selectorLabels
    spec:
      serviceAccount: rawfile-csi-driver
      priorityClassName: system-node-critical
      tolerations:
        - operator: "Exists"
      volumes:
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: Directory
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/rawfile-csi
            type: DirectoryOrCreate
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet
            type: DirectoryOrCreate
        - name: data-dir
          hostPath:
            path: /var/csi/rawfile
            type: DirectoryOrCreate
      containers:
        - name: csi-driver
          image: "harbor.k-space.ee/k-space/rawfile-localpv:latest"
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
            - name: PROVISIONER_NAME
              value: "rawfile.csi.openebs.io"
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: IMAGE_REPOSITORY
              value: "harbor.k-space.ee/k-space/rawfile-localpv"
            - name: IMAGE_TAG
              value: "latest"
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          ports:
            - name: metrics
              containerPort: 9100
            - name: csi-probe
              containerPort: 9808
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: data-dir
              mountPath: /data
          resources:
            limits:
              cpu: 1
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 100Mi
        - name: node-driver-registrar
          image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
          imagePullPolicy: IfNotPresent
          args:
            - --csi-address=$(ADDRESS)
            - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
            - --health-port=9809
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/rawfile-csi/csi.sock
          ports:
            - containerPort: 9809
              name: healthz
          livenessProbe:
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 5
            timeoutSeconds: 5
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
          resources:
            limits:
              cpu: 500m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 100Mi
        - name: external-provisioner
          image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
          imagePullPolicy: IfNotPresent
          args:
            - "--csi-address=$(ADDRESS)"
            - "--feature-gates=Topology=true"
            - "--strict-topology"
            - "--immediate-topology=false"
            - "--timeout=120s"
            - "--enable-capacity=true"
            - "--capacity-ownerref-level=1" # DaemonSet
            - "--node-deployment=true"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rawfile-csi-controller
  namespace: openebs
spec:
  replicas: 1
  serviceName: rawfile-csi
  selector:
    matchLabels: &selectorLabels
      app.kubernetes.io/name: rawfile-csi
      component: controller
  template:
    metadata:
      labels: *selectorLabels
    spec:
      serviceAccount: rawfile-csi-driver
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: Equal
          value: "true"
          effect: NoSchedule
      volumes:
        - name: socket-dir
          emptyDir: {}
      containers:
        - name: csi-driver
          image: "harbor.k-space.ee/k-space/rawfile-localpv"
          imagePullPolicy: Always
          args:
            - csi-driver
            - --disable-metrics
          env:
            - name: PROVISIONER_NAME
              value: "rawfile.csi.openebs.io"
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: IMAGE_REPOSITORY
              value: "harbor.k-space.ee/k-space/rawfile-localpv"
            - name: IMAGE_TAG
              value: "latest"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          ports:
            - name: csi-probe
              containerPort: 9808
          resources:
            limits:
              cpu: 1
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 100Mi
        - name: external-resizer
          image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
          imagePullPolicy: IfNotPresent
          args:
            - "--csi-address=$(ADDRESS)"
            - "--handle-volume-inuse-error=false"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: rawfile.csi.openebs.io
spec:
  attachRequired: false
  podInfoOnMount: true
  fsGroupPolicy: File
  storageCapacity: true
  volumeLifecycleModes:
    - Persistent
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rawfile-ext4
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  fsType: "ext4"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rawfile-xfs
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  fsType: "xfs"
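As background on what `rawfile-localpv` provisions: each PersistentVolume is backed by a plain file on the node (the DaemonSet above mounts `/var/csi/rawfile` as its data dir). A hedged sketch of the sparse-backing-file idea (illustrative helper, not the driver's actual code):

```python
import os
import tempfile

# Sketch only: a sparse backing file has the full apparent size of the
# volume without allocating blocks up front, which is why file-backed
# local PVs are cheap to create and easy to resize.
def create_backing_file(path, size_bytes):
    with open(path, "wb") as f:
        f.truncate(size_bytes)  # extends the file without writing data
    return os.path.getsize(path)  # apparent size, not blocks on disk

with tempfile.TemporaryDirectory() as d:
    img = os.path.join(d, "disk.img")
    apparent = create_backing_file(img, 1 << 20)  # 1 MiB "volume"
```

The real driver additionally formats the file with the StorageClass `fsType` and loop-mounts it into the pod; this sketch covers only the allocation step.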
@@ -26,9 +26,7 @@ spec:
         - name: PMA_ARBITRARY
           value: "1"
         - name: PMA_HOSTS
-          value: mysql-cluster.authelia,mysql-cluster.etherpad,mariadb.authelia,mariadb.nextcloud,172.20.36.1
-        - name: PMA_PORTS
-          value: 6446,6446,3306,3306,3306
+          value: mysql-cluster.etherpad.svc.cluster.local,mariadb.authelia,mariadb.nextcloud,172.20.36.1
         - name: PMA_ABSOLUTE_URI
           value: https://phpmyadmin.k-space.ee/
         - name: UPLOAD_LIMIT
@@ -40,6 +38,7 @@ metadata:
   name: phpmyadmin
   annotations:
+    kubernetes.io/ingress.class: traefik
     cert-manager.io/cluster-issuer: default
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
     traefik.ingress.kubernetes.io/router.tls: "true"
@@ -58,7 +57,8 @@ spec:
                 number: 80
   tls:
     - hosts:
-        - "*.k-space.ee"
+        - phpmyadmin.k-space.ee
+      secretName: phpmyadmin-tls
---
apiVersion: v1
kind: Service
@@ -98,7 +98,7 @@ spec:
       to:
         - namespaceSelector: {}
       ports:
-        - port: 6446
+        - port: 3306
     - # Allow connecting to any MySQL instance outside the cluster
       to:
         - ipBlock:
@@ -1,10 +0,0 @@
# Playground

Playground namespace is accessible to the `Developers` AD group.

A novel log aggregator is being developed in this namespace:

```
kubectl create secret generic -n playground mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n playground mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl apply -n playground -f logging.yml -f mongodb-support.yml -f mongoexpress.yml -f networkpolicy-base.yml
```
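The `head -c 30` pipelines above generate 30-character secrets from `/dev/urandom`. An equivalent sketch in Python using the standard `secrets` module:

```python
import secrets
import string

# Generate a 30-character alphanumeric password, equivalent in spirit to
# `cat /dev/urandom | base64 | head -c 30` from the commands above.
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(30))
```

Unlike the shell pipeline, `secrets.choice` draws uniformly from the chosen alphabet, so no base64 padding characters can appear in the result.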
@@ -1,263 +0,0 @@
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb
spec:
  additionalMongodConfig:
    systemLog:
      quiet: true
  members: 3
  type: ReplicaSet
  version: "5.0.13"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: readwrite
      db: application
      passwordSecretRef:
        name: mongodb-application-readwrite-password
      roles:
        - name: readWrite
          db: application
      scramCredentialsSecretName: mongodb-application-readwrite
    - name: readonly
      db: application
      passwordSecretRef:
        name: mongodb-application-readonly-password
      roles:
        - name: readOnly
          db: application
      scramCredentialsSecretName: mongodb-application-readonly
  statefulSet:
    spec:
      logLevel: WARN
      template:
        spec:
          containers:
            - name: mongod
              resources:
                requests:
                  cpu: 100m
                  memory: 2Gi
                limits:
                  cpu: 2000m
                  memory: 2Gi
            - name: mongodb-agent
              resources:
                requests:
                  cpu: 1m
                  memory: 100Mi
                limits: {}
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: app
                        operator: In
                        values:
                          - mongodb-svc
                  topologyKey: kubernetes.io/hostname
          nodeSelector:
            dedicated: monitoring
          tolerations:
            - key: dedicated
              operator: Equal
              value: monitoring
              effect: NoSchedule
      volumeClaimTemplates:
        - metadata:
            name: logs-volume
          spec:
            storageClassName: local-path
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 512Mi
        - metadata:
            name: data-volume
          spec:
            storageClassName: local-path
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 2Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      serviceAccountName: log-shipper
      containers:
        - name: log-shipper
          image: harbor.k-space.ee/k-space/log-shipper
          securityContext:
            runAsUser: 0
          env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: MONGODB_HOST
              valueFrom:
                secretKeyRef:
                  name: mongodb-application-readwrite
                  key: connectionString.standard
          ports:
            - containerPort: 8000
              name: metrics
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: etcmachineid
              mountPath: /etc/machine-id
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: etcmachineid
          hostPath:
            path: /etc/machine-id
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
      tolerations:
        - operator: "Exists"
          effect: "NoExecute"
        - operator: "Exists"
          effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-log-shipper
subjects:
  - kind: ServiceAccount
    name: log-shipper
    namespace: playground
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-shipper
  labels:
    app: log-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: log-shipper
spec:
  podSelector:
    matchLabels:
      app: log-shipper
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: prometheus-operator
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mongodb-svc
      ports:
        - port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: log-viewer-backend
spec:
  podSelector:
    matchLabels:
      app: log-viewer-backend
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mongodb-svc
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: log-viewer-frontend
spec:
  podSelector:
    matchLabels:
      app: log-viewer-frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: log-shipper
spec:
  selector:
    matchLabels:
      app: log-shipper
  podMetricsEndpoints:
    - port: metrics
@ -1 +0,0 @@
../mongodb-operator/mongodb-support.yml
@ -1 +0,0 @@
../shared/mongoexpress.yml
@ -1 +0,0 @@
../shared/networkpolicy-base.yml

1 prometheus-operator/.gitignore vendored
@ -1 +0,0 @@
bundle.yml
@ -1,7 +1,7 @@
# Prometheus operator

```
curl -L https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.61.1/bundle.yaml | sed -e 's/namespace: default/namespace: prometheus-operator/g' > bundle.yml
curl -L https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.59.0/bundle.yaml | sed -e 's/namespace: default/namespace: prometheus-operator/g' > bundle.yml
kubectl create namespace prometheus-operator
kubectl apply --server-side -n prometheus-operator -f bundle.yml
kubectl delete -n prometheus-operator configmap snmp-exporter
@ -9,16 +9,7 @@ kubectl create -n prometheus-operator configmap snmp-exporter --from-file=snmp.y
kubectl apply -n prometheus-operator -f application.yml -f node-exporter.yml -f blackbox-exporter.yml -f snmp-exporter.yml -f mikrotik-exporter.yml
```
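The `sed` invocation in the install commands above rewrites the operator bundle's hard-coded `namespace: default` fields before applying it. As a minimal illustration (run against a stand-in snippet rather than the real `bundle.yaml` download):

```shell
# Stand-in input simulating a fragment of the upstream bundle.yaml;
# the same substitution the install command applies to the full file.
printf 'metadata:\n  namespace: default\n' \
  | sed -e 's/namespace: default/namespace: prometheus-operator/g'
# prints:
# metadata:
#   namespace: prometheus-operator
```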

# Slack

```
kubectl create -n prometheus-operator secret generic slack-secrets \
  --from-literal=webhook-url=https://hooks.slack.com/services/...
```

# Mikrotik exporter
# Mikrotik expoeter

```
kubectl create -n prometheus-operator secret generic mikrotik-exporter \
80 prometheus-operator/alertmanager-config.yml Normal file
@ -0,0 +1,80 @@
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: email
  labels:
    alertmanagerConfig: email
spec:
  route:
    group_by: [email]
    receiver: test_router
    # When the first notification was sent, wait 'group_interval' to send a batch
    # of new alerts that started firing for that group.
    group_interval: 1s
    # If an alert has successfully been sent, wait 'repeat_interval' to
    # resend them.
    repeat_interval: 24h
    routes:
      - match:
          severity: critical
        group_wait: 1s
      - match:
          severity: error
        group_wait: 30m
      - match:
          severity: warning
        group_wait: 4h
      - match:
          severity: info
        group_wait: 24h
  receivers:
    - name: email_router
      email_configs:
        # - to: "{{ .GroupLabels.email }}"
        - to: 'Lauri Võsandi <lauri@k-space.ee>'
          from: 'Alerting <alerting@k-space.ee>'
          smarthost: mail.k-space.ee:465
          require_tls: false
          auth_username: 'alerting'
          auth_password: '5A8m0Y9yC4NcFXztmwMb'
          headers:
            subject: "You have {{ .Alerts.Firing | len }} firing alerts at {{ .CommonLabels.severity }} level"
          html: |
            Hi {{ .GroupLabels.email }},
            <p>
            You have the following firing alerts:
            <ul>
            {{ range .Alerts }}
            <li>{{.Labels.alertname}} on {{ .Labels.instance }}</li>
            {{ end }}
            </ul>
            </p>
            For more info see <a href="https://prom.k-space.ee/alerts">Prometheus alerts page</a>.
            <br><br>
            To silence alerts visit <a href="https://am.k-space.ee">Alert manager page</a>.
    - name: test_router
      email_configs:
        # - to: "{{ .GroupLabels.email }}"
        - to: 'Song Meo <songmeo@k-space.ee>'
          from: 'Alerting <alerting@k-space.ee>'
          smarthost: mail.k-space.ee:465
          require_tls: false
          auth_username: 'alerting'
          auth_password:
            name: email-config
            key: auth_password
          headers:
            subject: "[test] You have {{ .Alerts.Firing | len }} firing alerts at {{ .CommonLabels.severity }} level"
          html: |
            Hi,
            <p>
            You have the following firing alerts:
            <ul>
            {{ range .Alerts }}
            <li>{{.Labels.alertname}} on {{ .Labels.instance }}</li>
            {{ end }}
            </ul>
            </p>
            For more info see <a href="https://prom.k-space.ee/alerts">Prometheus alerts page</a>.
            <br><br>
            To silence alerts visit <a href="https://am.k-space.ee">Alert manager page</a>.
@ -1,29 +1,4 @@
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager
  labels:
    app.kubernetes.io/name: alertmanager
spec:
  route:
    routes:
      - continue: false
        receiver: slack-notifications
        matchers:
          - matchType: "="
            name: severity
            value: critical
    receiver: 'null'
  receivers:
    - name: 'slack-notifications'
      slackConfigs:
        - channel: '#kube-prod'
          sendResolved: true
          apiURL:
            name: slack-secrets
            key: webhook-url
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
@ -40,14 +15,9 @@ kind: Alertmanager
metadata:
  name: alertmanager
spec:
  alertmanagerConfigMatcherStrategy:
    type: None
  alertmanagerConfigNamespaceSelector: {}
  alertmanagerConfigSelector: {}
  alertmanagerConfiguration:
    name: alertmanager
  secrets:
    - slack-secrets
  alertmanagerConfigSelector:
    matchLabels:
      alertmanagerConfig: email
  nodeSelector:
    dedicated: monitoring
  tolerations:
@ -85,8 +55,10 @@ spec:
  alerting:
    alertmanagers:
      - namespace: prometheus-operator
        name: alertmanager-operated
        port: web
        name: alertmanager
        port: http
        pathPrefix: "/"
        apiVersion: v2
  externalUrl: "http://prom.k-space.ee/"
  replicas: 2
  shards: 1
@ -104,7 +76,7 @@ spec:
  probeSelector: {}
  ruleNamespaceSelector: {}
  ruleSelector: {}
  retentionSize: 8GB
  retentionSize: 80GB
  storage:
    volumeClaimTemplate:
      spec:
@ -112,7 +84,7 @@ spec:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
            storage: 100Gi
        storageClassName: local-path
---
apiVersion: v1
@ -409,6 +381,7 @@ kind: Ingress
metadata:
  name: prometheus
  annotations:
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@ -427,13 +400,15 @@ spec:
            number: 9090
  tls:
    - hosts:
        - "*.k-space.ee"
        - prom.k-space.ee
      secretName: prom-tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alertmanager
  annotations:
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@ -452,7 +427,8 @@ spec:
            number: 9093
  tls:
    - hosts:
        - "*.k-space.ee"
        - am.k-space.ee
      secretName: alertmanager-tls
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
@ -514,3 +490,276 @@ spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kubelet
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kube-state-metrics
spec:
  groups:
    - name: kube-state-metrics
      rules:
        - alert: KubernetesNodeReady
          expr: kube_node_status_condition{condition="Ready",status="true"} == 0
          for: 10m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes Node ready (instance {{ $labels.instance }})
            description: "Node {{ $labels.node }} has been unready for a long time\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesMemoryPressure
          expr: kube_node_status_condition{condition="MemoryPressure",status="true"} == 1
          for: 2m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes memory pressure (instance {{ $labels.instance }})
            description: "{{ $labels.node }} has MemoryPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesDiskPressure
          expr: kube_node_status_condition{condition="DiskPressure",status="true"} == 1
          for: 2m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes disk pressure (instance {{ $labels.instance }})
            description: "{{ $labels.node }} has DiskPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesOutOfDisk
          expr: kube_node_status_condition{condition="OutOfDisk",status="true"} == 1
          for: 2m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes out of disk (instance {{ $labels.instance }})
            description: "{{ $labels.node }} has OutOfDisk condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesOutOfCapacity
          expr: sum by (node) ((kube_pod_status_phase{phase="Running"} == 1) + on(uid) group_left(node) (0 * kube_pod_info{pod_template_hash=""})) / sum by (node) (kube_node_status_allocatable{resource="pods"}) * 100 > 90
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes out of capacity (instance {{ $labels.instance }})
            description: "{{ $labels.node }} is out of capacity\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesContainerOomKiller
          expr: (kube_pod_container_status_restarts_total - kube_pod_container_status_restarts_total offset 10m >= 1) and ignoring (reason) min_over_time(kube_pod_container_status_last_terminated_reason{reason="OOMKilled"}[10m]) == 1
          for: 0m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes container oom killer (instance {{ $labels.instance }})
            description: "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} has been OOMKilled {{ $value }} times in the last 10 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesJobFailed
          expr: kube_job_status_failed > 0
          for: 0m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes Job failed (instance {{ $labels.instance }})
            description: "Job {{$labels.namespace}}/{{$labels.exported_job}} failed to complete\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesCronjobSuspended
          expr: kube_cronjob_spec_suspend != 0
          for: 0m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes CronJob suspended (instance {{ $labels.instance }})
            description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is suspended\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesPersistentvolumeclaimPending
          expr: kube_persistentvolumeclaim_status_phase{phase="Pending"} == 1
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes PersistentVolumeClaim pending (instance {{ $labels.instance }})
            description: "PersistentVolumeClaim {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is pending\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesVolumeOutOfDiskSpace
          expr: kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes * 100 < 10
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes Volume out of disk space (instance {{ $labels.instance }})
            description: "Volume is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesVolumeFullInFourDays
          expr: predict_linear(kubelet_volume_stats_available_bytes[6h], 4 * 24 * 3600) < 0
          for: 0m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes Volume full in four days (instance {{ $labels.instance }})
            description: "{{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is expected to fill up within four days. Currently {{ $value | humanize }}% is available.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesPersistentvolumeError
          expr: kube_persistentvolume_status_phase{phase=~"Failed|Pending", job="kube-state-metrics"} > 0
          for: 0m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes PersistentVolume error (instance {{ $labels.instance }})
            description: "Persistent volume is in bad state\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesStatefulsetDown
          expr: (kube_statefulset_status_replicas_ready / kube_statefulset_status_replicas_current) != 1
          for: 1m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes StatefulSet down (instance {{ $labels.instance }})
            description: "A StatefulSet went down\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesHpaScalingAbility
          expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="AbleToScale"} == 1
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes HPA scaling ability (instance {{ $labels.instance }})
            description: "Pod is unable to scale\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesHpaMetricAvailability
          expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="ScalingActive"} == 1
          for: 0m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes HPA metric availability (instance {{ $labels.instance }})
            description: "HPA is not able to collect metrics\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesHpaScaleCapability
          expr: kube_horizontalpodautoscaler_status_desired_replicas >= kube_horizontalpodautoscaler_spec_max_replicas
          for: 2m
          labels:
            severity: info
          annotations:
            summary: Kubernetes HPA scale capability (instance {{ $labels.instance }})
            description: "The maximum number of desired Pods has been hit\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesPodNotHealthy
          expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
          for: 0m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
            description: "Pod has been in a non-ready state for longer than 15 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesPodCrashLooping
          expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
            description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesReplicassetMismatch
          expr: kube_replicaset_spec_replicas != kube_replicaset_status_ready_replicas
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes ReplicaSet mismatch (instance {{ $labels.instance }})
            description: "Deployment Replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesDeploymentReplicasMismatch
          expr: kube_deployment_spec_replicas != kube_deployment_status_replicas_available
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes Deployment replicas mismatch (instance {{ $labels.instance }})
            description: "Deployment Replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesStatefulsetReplicasMismatch
          expr: kube_statefulset_status_replicas_ready != kube_statefulset_status_replicas
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes StatefulSet replicas mismatch (instance {{ $labels.instance }})
            description: "A StatefulSet does not match the expected number of replicas.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesDeploymentGenerationMismatch
          expr: kube_deployment_status_observed_generation != kube_deployment_metadata_generation
          for: 10m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes Deployment generation mismatch (instance {{ $labels.instance }})
            description: "A Deployment has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesStatefulsetGenerationMismatch
          expr: kube_statefulset_status_observed_generation != kube_statefulset_metadata_generation
          for: 10m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes StatefulSet generation mismatch (instance {{ $labels.instance }})
            description: "A StatefulSet has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesStatefulsetUpdateNotRolledOut
          expr: max without (revision) (kube_statefulset_status_current_revision unless kube_statefulset_status_update_revision) * (kube_statefulset_replicas != kube_statefulset_status_replicas_updated)
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes StatefulSet update not rolled out (instance {{ $labels.instance }})
            description: "StatefulSet update has not been rolled out.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesDaemonsetRolloutStuck
          expr: kube_daemonset_status_number_ready / kube_daemonset_status_desired_number_scheduled * 100 < 100 or kube_daemonset_status_desired_number_scheduled - kube_daemonset_status_current_number_scheduled > 0
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes DaemonSet rollout stuck (instance {{ $labels.instance }})
            description: "Some Pods of DaemonSet are not scheduled or not ready\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesDaemonsetMisscheduled
          expr: kube_daemonset_status_number_misscheduled > 0
          for: 1m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes DaemonSet misscheduled (instance {{ $labels.instance }})
            description: "Some DaemonSet Pods are running where they are not supposed to run\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesCronjobTooLong
          expr: time() - kube_cronjob_next_schedule_time > 3600
          for: 0m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes CronJob too long (instance {{ $labels.instance }})
            description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is taking more than 1h to complete.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesJobSlowCompletion
          expr: kube_job_spec_completions - kube_job_status_succeeded > 0
          for: 12h
          labels:
            severity: critical
          annotations:
            summary: Kubernetes job slow completion (instance {{ $labels.instance }})
            description: "Kubernetes Job {{ $labels.namespace }}/{{ $labels.job_name }} did not complete in time.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesApiServerErrors
          expr: sum(rate(apiserver_request_total{job="apiserver",code=~"^(?:5..)$"}[1m])) / sum(rate(apiserver_request_total{job="apiserver"}[1m])) * 100 > 3
          for: 2m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes API server errors (instance {{ $labels.instance }})
            description: "Kubernetes API server is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesApiClientErrors
          expr: (sum(rate(rest_client_requests_total{code=~"(4|5).."}[1m])) by (instance, job) / sum(rate(rest_client_requests_total[1m])) by (instance, job)) * 100 > 1
          for: 2m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes API client errors (instance {{ $labels.instance }})
            description: "Kubernetes API client is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesClientCertificateExpiresNextWeek
          expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 7*24*60*60
          for: 0m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes client certificate expires next week (instance {{ $labels.instance }})
            description: "A client certificate used to authenticate to the apiserver is expiring next week.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesClientCertificateExpiresSoon
          expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 24*60*60
          for: 0m
          labels:
            severity: critical
          annotations:
            summary: Kubernetes client certificate expires soon (instance {{ $labels.instance }})
            description: "A client certificate used to authenticate to the apiserver is expiring in less than 24.0 hours.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
        - alert: KubernetesApiServerLatency
          expr: histogram_quantile(0.99, sum(rate(apiserver_request_latencies_bucket{subresource!="log",verb!~"^(?:CONNECT|WATCHLIST|WATCH|PROXY)$"} [10m])) WITHOUT (instance, resource)) / 1e+06 > 1
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: Kubernetes API server latency (instance {{ $labels.instance }})
            description: "Kubernetes API server has a 99th percentile latency of {{ $value }} seconds for {{ $labels.verb }} {{ $labels.resource }}.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
@ -156,7 +156,7 @@ metadata:
  name: blackbox-exporter
spec:
  revisionHistoryLimit: 0
  replicas: 3
  replicas: 2
  selector:
    matchLabels:
      app: blackbox-exporter
28816 prometheus-operator/bundle.yml Normal file
File diff suppressed because it is too large
@ -87,13 +87,7 @@ spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - mikrotik-exporter
              topologyKey: "kubernetes.io/hostname"
            - topologyKey: "kubernetes.io/hostname"
---
kind: Service
apiVersion: v1
@ -4,13 +4,11 @@ kind: Probe
metadata:
  name: nodes-proxmox
spec:
  scrapeTimeout: 30s
  targets:
    staticConfig:
      static:
        - nas.mgmt.k-space.ee:9100
        - pve1.proxmox.infra.k-space.ee:9100
        - pve2.proxmox.infra.k-space.ee:9100
        - pve8.proxmox.infra.k-space.ee:9100
        - pve9.proxmox.infra.k-space.ee:9100
      relabelingConfigs:
@ -88,37 +86,37 @@ spec:
            summary: Host memory under memory pressure (instance {{ $labels.instance }})
            description: The node is under heavy memory pressure. High rate of major page faults
        - alert: HostUnusualNetworkThroughputIn
          expr: sum by (instance) (rate(node_network_receive_bytes_total[2m])) > 800e+06
          expr: sum by (instance) (rate(node_network_receive_bytes_total[2m])) > 160e+06
          for: 1h
          labels:
            severity: warning
          annotations:
            summary: Host unusual network throughput in (instance {{ $labels.instance }})
            description: Host network interfaces are probably receiving too much data (> 800 MB/s)
            description: Host network interfaces are probably receiving too much data (> 160 MB/s)
        - alert: HostUnusualNetworkThroughputOut
          expr: sum by (instance) (rate(node_network_transmit_bytes_total[2m])) > 800e+06
          expr: sum by (instance) (rate(node_network_transmit_bytes_total[2m])) > 160e+06
          for: 1h
          labels:
            severity: warning
          annotations:
            summary: Host unusual network throughput out (instance {{ $labels.instance }})
            description: Host network interfaces are probably sending too much data (> 800 MB/s)
            description: Host network interfaces are probably sending too much data (> 160 MB/s)
        - alert: HostUnusualDiskReadRate
          expr: sum by (instance) (rate(node_disk_read_bytes_total[2m])) > 500e+06
          expr: sum by (instance) (rate(node_disk_read_bytes_total[2m])) > 50000000
          for: 1h
          labels:
            severity: warning
          annotations:
            summary: Host unusual disk read rate (instance {{ $labels.instance }})
            description: Disk is probably reading too much data (> 500 MB/s)
            description: Disk is probably reading too much data (> 50 MB/s)
        - alert: HostUnusualDiskWriteRate
          expr: sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 500e+06
          expr: sum by (instance) (rate(node_disk_written_bytes_total[2m])) > 50000000
          for: 1h
          labels:
            severity: warning
          annotations:
            summary: Host unusual disk write rate (instance {{ $labels.instance }})
            description: Disk is probably writing too much data (> 500 MB/s)
            description: Disk is probably writing too much data (> 50 MB/s)
        # Please add ignored mountpoints in node_exporter parameters like
        # "--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|run)($|/)".
        # Same rule using "node_filesystem_free_bytes" will fire when disk fills for non-root users.
@ -363,16 +361,12 @@ kind: PodMonitor
metadata:
  name: node-exporter
spec:

  selector:
    matchLabels:
      app: node-exporter
  podMetricsEndpoints:
    - port: web
      scrapeTimeout: 30s
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_node_name]
          targetLabel: node
---
apiVersion: v1
kind: ServiceAccount
@ -406,10 +400,9 @@ spec:
            - --path.rootfs=/host/root
            - --no-collector.wifi
            - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
            - --collector.netclass.ignored-devices=^(veth|cali|vxlan|cni|vnet|tap|lo|wg)
            - --collector.netdev.device-exclude=^(veth|cali|vxlan|cni|vnet|tap|lo|wg)
            - --collector.diskstats.ignored-devices=^(sr[0-9][0-9]*)$
          image: prom/node-exporter:v1.5.0
            - --collector.netclass.ignored-devices=^(veth.*|[a-f0-9]{15})$
            - --collector.netdev.device-exclude=^(veth.*|[a-f0-9]{15})$
          image: prom/node-exporter:v1.3.1
          resources:
            limits:
              cpu: 50m
@ -436,7 +429,6 @@ spec:
            readOnlyRootFilesystem: true
      hostNetwork: true
      hostPID: true
      priorityClassName: system-node-critical
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
@ -79,15 +79,12 @@ spec:
  prober:
    url: snmp-exporter:9116
    path: /snmp
  metricRelabelings:
    - sourceLabels: [__name__]
      regex: '(.*)'
      replacement: 'snmp_${1}'
      targetLabel: __name__
  targets:
    staticConfig:
      static:
        - ups-4.mgmt.k-space.ee
        - ups-5.mgmt.k-space.ee
        - ups-6.mgmt.k-space.ee
        - ups-7.mgmt.k-space.ee
        - ups-8.mgmt.k-space.ee
        - ups-9.mgmt.k-space.ee
@ -111,7 +108,7 @@ spec:
          annotations:
            summary: One or more UPS-es is not in normal operation mode. This either means
              power is lost or UPS was loaded and it's now in bypass mode.
          expr: sum(snmp_upsOutputSource { upsOutputSource = 'normal' }) != 4
          expr: sum(snmp_upsOutputSource { upsOutputSource = 'normal' }) < 6
          for: 1m
          labels:
            severity: critical
@ -135,11 +132,6 @@ spec:
  prober:
    url: snmp-exporter:9116
    path: /snmp
  metricRelabelings:
    - sourceLabels: [__name__]
      regex: '(.*)'
      replacement: 'snmp_${1}'
      targetLabel: __name__
  targets:
    staticConfig:
      static:
@ -174,11 +166,6 @@ spec:
  prober:
    url: snmp-exporter:9116
    path: /snmp
  metricRelabelings:
    - sourceLabels: [__name__]
      regex: '(.*)'
      replacement: 'snmp_${1}'
      targetLabel: __name__
  targets:
    staticConfig:
      static:
@ -33,7 +33,6 @@ epson_beamer:
      type: gauge

printer_mib:
  version: 1
  walk:
    - 1.3.6.1.2.1.25.3.5.1.1
    - 1.3.6.1.2.1.43.11.1.1.5
@ -1,55 +0,0 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongo
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: minio
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  fsType: "xfs"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysql
provisioner: rawfile.csi.openebs.io
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  fsType: "xfs"
@ -5,6 +5,5 @@ Calico implements the inter-pod overlay network

```
curl https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml -O
curl https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -O
kubectl apply -f custom-resources.yaml
kubectl replace -f tigera-operator.yaml
kubectl apply -f tigera-operator.yaml -f custom-resources.yaml
```
64
tigera-operator/cleanup.sh
Normal file
@ -0,0 +1,64 @@
#!/bin/bash

NAMESPACE=${NAMESPACE:-longhorn-system}

remove_and_wait() {
  local crd=$1
  out=$(kubectl -n "${NAMESPACE}" delete "$crd" --all 2>&1)
  if [ $? -ne 0 ]; then
    echo "$out"
    return
  fi
  while true; do
    out=$(kubectl -n "${NAMESPACE}" get "$crd" -o yaml | grep 'items: \[\]')
    if [ $? -eq 0 ]; then
      break
    fi
    sleep 1
  done
  echo "all $crd instances deleted"
}

remove_crd_instances() {
  remove_and_wait volumes.longhorn.rancher.io
  # TODO: remove engines and replicas once we fix https://github.com/rancher/longhorn/issues/273
  remove_and_wait engines.longhorn.rancher.io
  remove_and_wait replicas.longhorn.rancher.io
  remove_and_wait engineimages.longhorn.rancher.io
  remove_and_wait settings.longhorn.rancher.io
  # do this one last; manager crashes
  remove_and_wait nodes.longhorn.rancher.io
}

# Delete driver related workloads in specific order
remove_driver() {
  kubectl -n "${NAMESPACE}" delete deployment.apps/longhorn-driver-deployer
  kubectl -n "${NAMESPACE}" delete daemonset.apps/longhorn-csi-plugin
  kubectl -n "${NAMESPACE}" delete statefulset.apps/csi-attacher
  kubectl -n "${NAMESPACE}" delete service/csi-attacher
  kubectl -n "${NAMESPACE}" delete statefulset.apps/csi-provisioner
  kubectl -n "${NAMESPACE}" delete service/csi-provisioner
  kubectl -n "${NAMESPACE}" delete daemonset.apps/longhorn-flexvolume-driver
}

# Delete all workloads in the namespace
remove_workloads() {
  kubectl -n "${NAMESPACE}" get daemonset.apps -o yaml | kubectl delete -f -
  kubectl -n "${NAMESPACE}" get deployment.apps -o yaml | kubectl delete -f -
  kubectl -n "${NAMESPACE}" get replicaset.apps -o yaml | kubectl delete -f -
  kubectl -n "${NAMESPACE}" get statefulset.apps -o yaml | kubectl delete -f -
  kubectl -n "${NAMESPACE}" get pods -o yaml | kubectl delete -f -
  kubectl -n "${NAMESPACE}" get service -o yaml | kubectl delete -f -
}

# Delete CRD definitions with longhorn.rancher.io in the name
remove_crds() {
  for crd in $(kubectl get crd -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep longhorn.rancher.io); do
    kubectl delete "crd/$crd"
  done
}

remove_crd_instances
remove_driver
remove_workloads
remove_crds
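The core of `remove_and_wait` above is a poll-until-condition loop. That retry idiom can be isolated into a generic helper; a sketch under assumed names (`wait_until` and the demo condition are mine, not part of the script):

```shell
#!/bin/bash
# Poll a command until it succeeds, up to a given number of attempts,
# in the spirit of remove_and_wait's wait loop (sketch, names assumed).
wait_until() {
  local tries=$1; shift
  while [ "$tries" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then
      return 0           # condition holds
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1               # gave up
}

# Demo: /tmp exists, so the condition holds on the first attempt.
wait_until 3 test -d /tmp && echo "condition met"
```

Passing the condition as arguments (`"$@"`) rather than a string avoids an `eval`, so any kubectl-and-grep pipeline can be wrapped in a small function and handed to the same helper.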
@ -1,5 +1,5 @@
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
@ -10,7 +10,7 @@ spec:
  # Note: The ipPools section cannot be modified post-install.
  ipPools:
  - blockSize: 26
    cidr: 10.244.0.0/16
    cidr: 192.168.0.0/16
    encapsulation: VXLANCrossSubnet
    natOutgoing: Enabled
    nodeSelector: all()
@ -18,7 +18,7 @@ spec:
---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
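The ipPools `cidr` must agree with the pod network CIDR the cluster was initialised with (kubeadm's `--pod-network-cidr` flag or the `podSubnet` field), otherwise Calico allocates pod addresses the rest of the cluster does not route. A minimal sketch of the matching kubeadm side, with the value assumed to mirror the manifest:

```yaml
# kubeadm ClusterConfiguration fragment (sketch); podSubnet must equal
# the cidr in the Calico Installation ipPools.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
```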
File diff suppressed because it is too large
@ -64,16 +64,8 @@ spec:
              number: 9000
  tls:
    - hosts:
        - "*.k-space.ee"
      secretName: wildcard-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
spec:
  defaultCertificate:
    secretName: wildcard-tls
        - traefik.k-space.ee
      secretName: traefik-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
@ -104,6 +104,7 @@ metadata:
  name: pve
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: default
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd,traefik-proxmox-redirect@kubernetescrd
@ -146,7 +147,9 @@ spec:
              number: 8006
  tls:
    - hosts:
        - "*.k-space.ee"
        - pve.k-space.ee
        - proxmox.k-space.ee
      secretName: pve-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
@ -1,16 +1,12 @@
image:
  tag: "2.9"
  tag: "2.8"

websecure:
  tls:
    enabled: true

providers:
  kubernetesCRD:
    enabled: true

  kubernetesIngress:
    allowEmptyServices: true
    allowExternalNameServices: true

deployment:
@ -17,6 +17,7 @@ metadata:
  name: voron
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: "true"
@ -35,4 +36,5 @@ spec:
              name: http
  tls:
    - hosts:
        - "*.k-space.ee"
        - voron.k-space.ee
      secretName: voron-tls
@ -41,6 +41,7 @@ kind: Ingress
metadata:
  name: whoami
  annotations:
    cert-manager.io/cluster-issuer: default
    external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
@ -49,7 +50,8 @@ metadata:
spec:
  tls:
    - hosts:
        - "*.k-space.ee"
        - "whoami.k-space.ee"
      secretName: whoami-tls
  rules:
    - host: "whoami.k-space.ee"
      http:
@ -104,6 +104,7 @@ metadata:
  namespace: wildduck
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: "true"
@ -122,7 +123,8 @@ spec:
              number: 80
  tls:
    - hosts:
        - "*.k-space.ee"
        - webmail.k-space.ee
      secretName: webmail-tls
---
apiVersion: codemowers.io/v1alpha1
kind: KeyDBCluster