41 Commits

Author SHA1 Message Date
6b635b6dc7 proxmox: first attempt to move to ingressroute 2022-11-04 11:39:36 +02:00
1bcfbed130 traefik: Bump version 2022-10-21 08:30:04 +03:00
3b1cda8a58 traefik: Pull resources only from trusted namespaces 2022-10-21 08:27:53 +03:00
2fd0112c28 elastic-system: Exclude logging ECK stack itself 2022-10-21 00:57:11 +03:00
9275f745ce elastic-system: Remove Filebeat's dependency on Kibana 2022-10-21 00:56:54 +03:00
3d86b6acde elastic-system: Bump to 8.4.3 2022-10-14 20:18:28 +03:00
4a94cd4af0 longhorn-system: Remove Prometheus annotation as we use PodMonitor already 2022-10-14 15:03:48 +03:00
a27f273c0b Add Grafana 2022-10-14 14:38:23 +03:00
4686108f42 Switch to wildcard *.k-space.ee certificate 2022-10-14 14:32:36 +03:00
30b7e50afb kube-system: Add metrics-server 2022-10-14 14:23:21 +03:00
e4c9675b99 tigera-operator: Remove unrelated files 2022-10-14 14:05:40 +03:00
017bdd9fd8 tigera-operator: Upgrade Calico 2022-10-14 14:03:34 +03:00
0fd0094ba0 playground: Initial commit 2022-10-14 00:14:35 +03:00
d20fdf350d drone: Switch templates to drone-kaniko plugin 2022-10-12 14:24:57 +03:00
bac5040d2a README: access/auth: collapse bootstrapping
For 'how to connect to cluster', the server-side setup is not needed by connecting clients.
Hiding the section makes the steps more concise.
2022-10-11 10:47:41 +03:00
Danyliuk 4d5851259d Update .gitignore file. Add IntelliJ IDEA part 2022-10-08 16:43:48 +00:00
8ee1896a55 harbor: Move to storage nodes 2022-10-04 13:39:25 +03:00
04b786b18d prometheus-operator: Bump blackbox exporter replica count to 3 2022-10-04 10:11:53 +03:00
1d1764093b prometheus-operator: Remove pulled UPS-es 2022-10-03 10:04:24 +03:00
df6e268eda elastic-system: Add PodMonitor for exporter 2022-09-30 10:33:41 +03:00
00f8bfef6c elastic-system: Update sharding, enable memory-mapped IO, move to Longhorn 2022-09-30 10:21:10 +03:00
109859e07b elastic-system: Reduce replica count for Kibana 2022-09-28 11:01:08 +03:00
7e518da638 elastic-system: Make Kibana healthcheck work with anonymous auth 2022-09-28 11:00:38 +03:00
5ef5e14866 prometheus-operator: Specify priorityClassName: system-node-critical for node-exporters 2022-09-28 10:33:44 +03:00
310b2faaef prometheus-operator: Add node label to node-exporters 2022-09-28 09:32:31 +03:00
6b65de65d4 Move kube-state-metrics 2022-09-26 15:50:58 +03:00
02d1236eba elastic-system: Add Syslog ingestion 2022-09-23 16:37:29 +03:00
610ce0d490 elastic-system: Bump version to 2.4.0 2022-09-23 16:16:22 +03:00
051e300359 Update tech mapping 2022-09-21 17:12:24 +03:00
5b11b7f3a6 phpmyadmin: Use 6446 for MySQL Operator instances 2022-09-21 11:38:13 +03:00
546dc71450 prometheus-operator: Fix SNMP for older HP printers 2022-09-20 23:26:09 +03:00
26a35cd0c3 prometheus-operator: Add snmp_ prefix 2022-09-20 17:09:26 +03:00
790ffa175b prometheus-operator: Fix Alertmanager integration 2022-09-20 12:22:49 +03:00
9a672d7ef3 logging: Bump ZincSearch memory limit 2022-09-18 10:05:54 +03:00
d1cb00ff83 Reduce Filebeat logging verbosity 2022-09-17 08:06:42 +03:00
9cc39fcd17 argocd: Add members repo 2022-09-17 08:06:19 +03:00
ae8d03ec03 argocd: Add elastic-system 2022-09-17 08:05:47 +03:00
bf9d063b2c mysql-operator: Bump to version 8.0.30-2.0.6 2022-09-16 08:41:07 +03:00
2efaf7b456 mysql-operator: Fix network policy 2022-09-16 08:40:31 +03:00
c4208037e2 logging: Replace Graylog with ZincSearch 2022-09-16 08:34:53 +03:00
edcb6399df elastic-system: Fixes and cleanups 2022-09-16 08:24:13 +03:00
53 changed files with 13852 additions and 1448 deletions

.gitignore

@@ -3,3 +3,7 @@
*.swp
*.save
*.1
### IntelliJ IDEA ###
.idea
*.iml


@@ -23,6 +23,7 @@ Most endpoints are protected by OIDC authentication or Authelia SSO middleware.
General discussion is happening in the `#kube` Slack channel.
<details><summary>Bootstrapping access</summary>
For bootstrap access obtain `/etc/kubernetes/admin.conf` from one of the master
nodes and place it under `~/.kube/config` on your machine.
@@ -46,9 +47,9 @@ EOF
sudo systemctl daemon-reload
systemctl restart kubelet
```
</details>
Afterwards following can be used to talk to the Kubernetes cluster using
OIDC credentials:
The following can be used to talk to the Kubernetes cluster using OIDC credentials:
```bash
kubectl krew install oidc-login
@@ -89,28 +90,41 @@ EOF
For access control mapping see [cluster-role-bindings.yml](cluster-role-bindings.yml)
### systemd-resolved issues on access
```sh
Unable to connect to the server: dial tcp: lookup master.kube.k-space.ee on 127.0.0.53:53: no such host
```
```
Network → VPN → `IPv4` → Other nameservers (Muud nimeserverid): `172.21.0.1`
Network → VPN → `IPv6` → Other nameservers (Muud nimeserverid): `2001:bb8:4008:21::1`
Network → VPN → `IPv4` → Search domains (Otsingudomeenid): `kube.k-space.ee`
Network → VPN → `IPv6` → Search domains (Otsingudomeenid): `kube.k-space.ee`
```
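The same settings can be applied from the command line with systemd-resolved directly; a minimal sketch, assuming the VPN interface is called `tun0`:

```sh
# Use the k-space resolvers for lookups going out over the VPN link only
resolvectl dns tun0 172.21.0.1 2001:bb8:4008:21::1
# Route lookups for the kube.k-space.ee domain through that link
resolvectl domain tun0 kube.k-space.ee
```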
# Technology mapping
Our self-hosted Kubernetes stack compared to AWS-based deployments:
| Hipster startup | Self-hosted hackerspace | Purpose |
|-----------------|-------------------------------------|---------------------------------------------------------------------|
| AWS EC2 | Proxmox | Virtualization layer |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS ECR | Harbor | Docker registry |
| AWS DocumentDB | MongoDB | NoSQL database |
| AWS S3 | Minio | Object storage |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub Actions | Drone | Build Docker images |
| Gmail | Wildduck | E-mail |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS VPC | Calico | Overlay network |
| Hipster startup | Self-hosted hackerspace | Purpose |
|-------------------|-------------------------------------|---------------------------------------------------------------------|
| AWS ALB | Traefik | Reverse proxy also known as ingress controller in Kubernetes jargon |
| AWS AMP | Prometheus Operator | Monitoring and alerting |
| AWS CloudTrail | ECK Operator | Log aggregation |
| AWS DocumentDB | MongoDB Community Operator | Highly available NoSQL database |
| AWS EBS | Longhorn | Block storage for arbitrary applications needing persistent storage |
| AWS EC2 | Proxmox | Virtualization layer |
| AWS ECR | Harbor | Docker registry |
| AWS EKS | kubeadm | Provision Kubernetes master nodes |
| AWS NLB | MetalLB | L2/L3 level load balancing |
| AWS RDS for MySQL | MySQL Operator | Provision highly available relational databases |
| AWS Route53 | Bind and RFC2136 | DNS records and Let's Encrypt DNS validation |
| AWS S3 | Minio Operator | Highly available object storage |
| AWS VPC | Calico | Overlay network |
| Dex | Authelia | ACL mapping and OIDC provider which integrates with GitHub/Samba |
| GitHub Actions | Drone | Build Docker images |
| GitHub | Gitea | Source code management, issue tracking |
| GitHub OAuth2 | Samba (Active Directory compatible) | Source of truth for authentication and authorization |
| Gmail | Wildduck | E-mail |
External dependencies running as classic virtual machines:


@@ -36,8 +36,13 @@ kubectl -n argocd create secret generic gitea-kube-staging \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-staging \
--from-file=sshPrivateKey=id_ecdsa
kubectl -n argocd create secret generic gitea-kube-members \
--from-literal=type=git \
--from-literal=url=git@git.k-space.ee:k-space/kube-members \
--from-file=sshPrivateKey=id_ecdsa
kubectl label -n argocd secret gitea-kube argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-staging argocd.argoproj.io/secret-type=repository
kubectl label -n argocd secret gitea-kube-members argocd.argoproj.io/secret-type=repository
rm -fv id_ecdsa
```
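To verify the repositories were registered, listing secrets with the repository label should show all three entries (a quick check, assuming the labels above were applied):

```bash
kubectl -n argocd get secrets -l argocd.argoproj.io/secret-type=repository
```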


@@ -5,17 +5,16 @@ metadata:
namespace: argocd
spec:
project: default
destination:
server: 'https://kubernetes.default.svc'
namespace: elastic-system
syncPolicy:
automated: {}
syncOptions:
- CreateNamespace=true
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: elastic-system
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: elastic-system
syncPolicy:
syncOptions:
- CreateNamespace=true
ignoreDifferences:
- group: admissionregistration.k8s.io
kind: ValidatingWebhookConfiguration


@@ -1,17 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: foobar
name: grafana
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube.git'
path: foobar
path: grafana
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: foobar
namespace: grafana
syncPolicy:
syncOptions:
- CreateNamespace=true


@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: members
namespace: argocd
spec:
project: default
source:
repoURL: 'git@git.k-space.ee:k-space/kube-members.git'
path: .
targetRevision: HEAD
destination:
server: 'https://kubernetes.default.svc'
namespace: members
syncPolicy:
syncOptions:
- CreateNamespace=true
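Once the application manifest is committed, sync state can be checked from the ArgoCD UI or, assuming the `argocd` CLI is logged in to argocd.k-space.ee, with something like:

```bash
argocd app get members
argocd app sync members
```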


@@ -16,7 +16,6 @@ server:
ingress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
@@ -24,8 +23,7 @@ server:
- argocd.k-space.ee
tls:
- hosts:
- argocd.k-space.ee
secretName: argocd-server-tls
- "*.k-space.ee"
configEnabled: true
config:
admin.enabled: "false"


@@ -162,8 +162,8 @@ kubectl -n argocd create secret generic argocd-secret \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \
| jq '.identity_providers.oidc.clients[] | select(.id == "argocd") | .secret' -r)
kubectl -n monitoring delete secret oidc-secret
kubectl -n monitoring create secret generic oidc-secret \
kubectl -n grafana delete secret oidc-secret
kubectl -n grafana create secret generic oidc-secret \
--from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=$( \
kubectl get secret -n authelia oidc-secrets -o json \
| jq '.data."oidc-secrets.yml"' -r | base64 -d | yq -o json \


@@ -295,7 +295,6 @@ metadata:
labels:
app.kubernetes.io/name: authelia
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/tls-acme: "true"
traefik.ingress.kubernetes.io/router.entryPoints: websecure
@@ -315,8 +314,7 @@ spec:
number: 80
tls:
- hosts:
- auth.k-space.ee
secretName: authelia-tls
- "*.k-space.ee"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware


@@ -182,12 +182,6 @@ metadata:
annotations:
kubernetes.io/ingress.class: traefik
# Following specifies the certificate issuer defined in
# ../cert-manager/issuer.yml
# This is where the HTTPS certificates for the
# `tls:` section below are obtained from
cert-manager.io/cluster-issuer: default
# This tells Traefik this Ingress object is associated with the
# https:// entrypoint
# Global http:// to https:// redirect is enabled in
@@ -234,8 +228,7 @@ spec:
number: 3003
tls:
- hosts:
- cams.k-space.ee
secretName: camtiler-tls
- "*.k-space.ee"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
@@ -371,7 +364,6 @@ metadata:
name: minio
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@@ -389,8 +381,7 @@ spec:
number: 80
tls:
- hosts:
- cams-s3.k-space.ee
secretName: cams-s3-tls
- "*.k-space.ee"
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition


@@ -77,14 +77,11 @@ steps:
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
image: harbor.k-space.ee/k-space/drone-kaniko
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
repo: ${DRONE_REPO}
tags: latest-arm64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
username:
from_secret: docker_username
password:
@@ -109,14 +106,11 @@ steps:
- echo "ENV GIT_COMMIT_TIMESTAMP=$(git log -1 --format=%cd --date=iso-strict)" >> Dockerfile
- cat Dockerfile
- name: docker
image: plugins/docker
image: harbor.k-space.ee/k-space/drone-kaniko
settings:
repo: harbor.k-space.ee/${DRONE_REPO}
repo: ${DRONE_REPO}
tags: latest-amd64
registry: harbor.k-space.ee
squash: true
experimental: true
mtu: 1300
storage_driver: vfs
username:
from_secret: docker_username
@@ -130,8 +124,8 @@ steps:
- name: manifest
image: plugins/manifest
settings:
target: harbor.k-space.ee/${DRONE_REPO}:latest
template: harbor.k-space.ee/${DRONE_REPO}:latest-ARCH
target: ${DRONE_REPO}:latest
template: ${DRONE_REPO}:latest-ARCH
platforms:
- linux/amd64
- linux/arm64
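For reference, after the switch a build can be confirmed to have produced a combined multi-arch manifest in Harbor; a rough check, with the repository name below being a placeholder:

```bash
# example-image is a placeholder — substitute the image pushed by the pipeline
docker manifest inspect harbor.k-space.ee/k-space/example-image:latest
```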


@@ -83,7 +83,6 @@ kind: Ingress
metadata:
name: drone
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
@@ -91,8 +90,7 @@ metadata:
spec:
tls:
- hosts:
- "drone.k-space.ee"
secretName: drone-tls
- "*.k-space.ee"
rules:
- host: "drone.k-space.ee"
http:


@@ -1,7 +1,7 @@
# elastic-operator
```
wget https://download.elastic.co/downloads/eck/2.2.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.2.0/operator.yaml
wget https://download.elastic.co/downloads/eck/2.4.0/crds.yaml
wget https://download.elastic.co/downloads/eck/2.4.0/operator.yaml
kubectl apply -n elastic-system -f application.yml -f crds.yaml -f operator.yaml
```
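Once applied, the operator-managed resources can be checked with plain kubectl; a quick sanity check:

```
kubectl -n elastic-system get elasticsearch,kibana,beat
```

The HEALTH column should eventually report green once the operator has reconciled everything.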


@@ -1,15 +1,16 @@
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: filebeat
spec:
type: filebeat
version: 8.4.1
version: 8.4.3
elasticsearchRef:
name: elasticsearch
kibanaRef:
name: kibana
config:
logging:
level: warning
http:
enabled: true
port: 5066
@@ -24,50 +25,15 @@ spec:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- drop_fields:
fields:
- stream
- target
- host
ignore_missing: true
- rename:
fields:
- from: "kubernetes.node.name"
to: "host"
- from: "kubernetes.pod.name"
to: "pod"
- from: "kubernetes.labels.app"
to: "app"
- from: "kubernetes.namespace"
to: "namespace"
ignore_missing: true
- drop_fields:
fields:
- input
- agent
- container
- ecs
- host
- kubernetes
- log
- "@metadata"
ignore_missing: true
- decode_json_fields:
fields:
- message
max_depth: 2
expand_keys: true
target: ""
add_error_key: true
daemonSet:
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
serviceAccountName: filebeat
automountServiceAccountToken: true
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true # Allows to provide richer host metadata
containers:
- name: filebeat
securityContext:
@@ -84,6 +50,12 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.nodeName
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
- name: exporter
image: sepa/beats-exporter
args:
@@ -108,6 +80,104 @@ spec:
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: filebeat-syslog
spec:
type: filebeat
version: 8.4.3
elasticsearchRef:
name: elasticsearch
config:
logging:
level: warning
http:
enabled: true
port: 5066
filebeat:
inputs:
- type: syslog
format: rfc5424
protocol.udp:
host: "0.0.0.0:1514"
- type: syslog
format: rfc5424
protocol.tcp:
host: "0.0.0.0:1514"
deployment:
replicas: 2
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 1514
name: syslog
protocol: UDP
volumeMounts:
- name: filebeat-registry
mountPath: /usr/share/filebeat/data
- name: exporter
image: sepa/beats-exporter
args:
- -p=5066
ports:
- containerPort: 8080
name: exporter
protocol: TCP
volumes:
- name: filebeat-registry
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: filebeat-syslog-udp
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: filebeat-syslog
port: 514
protocol: UDP
targetPort: 1514
selector:
beat.k8s.elastic.co/name: filebeat-syslog
---
apiVersion: v1
kind: Service
metadata:
name: filebeat-syslog-tcp
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: filebeat-syslog
port: 514
protocol: TCP
targetPort: 1514
selector:
beat.k8s.elastic.co/name: filebeat-syslog
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@@ -148,12 +218,10 @@ kind: Elasticsearch
metadata:
name: elasticsearch
spec:
version: 8.4.1
version: 8.4.3
nodeSets:
- name: default
count: 3
config:
node.store.allow_mmap: false
count: 1
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
@@ -163,7 +231,7 @@ spec:
resources:
requests:
storage: 5Gi
storageClassName: local-path
storageClassName: longhorn
http:
tls:
selfSignedCertificate:
@@ -174,8 +242,8 @@ kind: Kibana
metadata:
name: kibana
spec:
version: 8.4.1
count: 2
version: 8.4.3
count: 1
elasticsearchRef:
name: elasticsearch
http:
@@ -196,6 +264,23 @@ spec:
entries:
- key: elastic
path: xpack.security.authc.providers.anonymous.anonymous1.credentials.password
podTemplate:
metadata:
annotations:
co.elastic.logs/enabled: 'false'
spec:
containers:
- name: kibana
readinessProbe:
httpGet:
path: /app/home
port: 5601
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
---
apiVersion: networking.k8s.io/v1
kind: Ingress
@@ -203,7 +288,6 @@ metadata:
name: kibana
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@@ -222,5 +306,26 @@ spec:
number: 5601
tls:
- hosts:
- kibana.k-space.ee
secretName: kibana-tls
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: filebeat
spec:
selector:
matchLabels:
beat.k8s.elastic.co/name: filebeat
podMetricsEndpoints:
- port: exporter
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: elasticsearch
spec:
selector:
matchLabels:
app.kubernetes.io/name: elasticsearch-exporter
podMetricsEndpoints:
- port: exporter
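The filebeat-syslog LoadBalancer services defined above can be smoke-tested from any host that reaches the MetalLB address; a sketch using util-linux `logger`:

```bash
# Send one RFC 5424 test message over UDP and one over TCP
logger --server syslog.k-space.ee --port 514 --udp --rfc5424 "syslog ingestion test"
logger --server syslog.k-space.ee --port 514 --tcp --rfc5424 "syslog ingestion test"
```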


@@ -3,12 +3,12 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: agents.agent.k8s.elastic.co
spec:
group: agent.k8s.elastic.co
@@ -203,7 +203,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -246,7 +246,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -259,7 +259,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -376,6 +376,13 @@ spec:
- standalone
- fleet
type: string
policyID:
description: PolicyID optionally determines into which Agent Policy this Agent will be enrolled. If left empty the default policy will be used.
type: string
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying DaemonSet or Deployment.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes Secrets containing sensitive configuration options for the Agent. Secrets data can be then referenced in the Agent config using the Secret's keys or as specified in `Entries` field of each SecureSetting.
items:
@@ -448,24 +455,18 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: apmservers.apm.k8s.elastic.co
spec:
group: apm.k8s.elastic.co
@@ -565,7 +566,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -608,7 +609,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -621,7 +622,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -736,6 +737,10 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the APM Server pods.
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes secrets containing sensitive configuration options for APM Server.
items:
@@ -792,6 +797,10 @@ spec:
kibanaAssociationStatus:
description: KibanaAssociationStatus is the status of any auto-linking to Kibana.
type: string
observedGeneration:
description: ObservedGeneration represents the .metadata.generation that the status is based upon. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the APM Server controller has not yet processed the changes contained in the APM Server specification.
format: int64
type: integer
secretTokenSecret:
description: SecretTokenSecretName is the name of the Secret that contains the secret token
type: string
@@ -895,7 +904,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -938,7 +947,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -951,7 +960,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -1112,24 +1121,18 @@ spec:
type: object
served: false
storage: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: beats.beat.k8s.elastic.co
spec:
group: beat.k8s.elastic.co
@@ -1294,6 +1297,10 @@ spec:
description: ServiceName is the name of an existing Kubernetes service which is used to make requests to the referenced object. It has to be in the same namespace as the referenced resource. If left empty, the default HTTP service of the referenced resource is used.
type: string
type: object
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying DaemonSet or Deployment.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes Secrets containing sensitive configuration options for the Beat. Secrets data can be then referenced in the Beat config using the Secret's keys or as specified in `Entries` field of each SecureSetting.
items:
@@ -1353,6 +1360,10 @@ spec:
kibanaAssociationStatus:
description: AssociationStatus is the status of an association resource.
type: string
observedGeneration:
description: ObservedGeneration represents the .metadata.generation that the status is based upon. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the Beats controller has not yet processed the changes contained in the Beats specification.
format: int64
type: integer
version:
description: 'Version of the stack resource currently running. During version upgrades, multiple versions may run in parallel: this value specifies the lowest version currently running.'
type: string
@@ -1362,24 +1373,18 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: elasticmapsservers.maps.k8s.elastic.co
spec:
group: maps.k8s.elastic.co
@@ -1486,7 +1491,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -1529,7 +1534,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -1542,7 +1547,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -1641,6 +1646,10 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the Elastic Maps Server pods
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
serviceAccountName:
description: ServiceAccountName is used to check access from the current resource to a resource (for ex. Elasticsearch) in a different namespace. Can only be used if ECK is enforcing RBAC on references.
type: string
@@ -1667,6 +1676,10 @@ spec:
health:
description: Health of the deployment.
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed for this Elastic Maps Server. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the Elastic Maps controller has not yet processed the changes contained in the Elastic Maps specification.
format: int64
type: integer
selector:
description: Selector is the label selector used to find all pods.
type: string
@@ -1683,24 +1696,18 @@ spec:
specReplicasPath: .spec.count
statusReplicasPath: .status.count
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: elasticsearches.elasticsearch.k8s.elastic.co
spec:
group: elasticsearch.k8s.elastic.co
@@ -1803,7 +1810,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -1846,7 +1853,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -1859,7 +1866,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -2058,15 +2065,15 @@ spec:
type: string
type: object
spec:
description: 'Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
dataSource:
description: 'This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -2081,8 +2088,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
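The dataSource and dataSourceRef fields above let a claim be pre-populated from another object. A minimal sketch of a PersistentVolumeClaim restored from an existing VolumeSnapshot via dataSource (names and storage class are hypothetical; non-core populators would go through dataSourceRef and require the AnyVolumeDataSource feature gate):

# Illustrative sketch only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: data-snapshot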
@@ -2097,8 +2105,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
resources:
description: 'Resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
limits:
additionalProperties:
@@ -2120,7 +2129,7 @@ spec:
type: object
type: object
selector:
description: A label query over volumes to consider for binding.
description: selector is a label query over volumes to consider for binding.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -2149,21 +2158,22 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
volumeMode:
description: volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
type: string
volumeName:
description: VolumeName is the binding reference to the PersistentVolume backing this claim.
description: volumeName is the binding reference to the PersistentVolume backing this claim.
type: string
type: object
status:
description: 'Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
@@ -2174,7 +2184,7 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: allocatedResources tracks the storage resource capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal to or lower than the requested capacity. This is an alpha field and requires enabling the RecoverVolumeExpansionFailure feature.
type: object
capacity:
additionalProperties:
@@ -2183,26 +2193,26 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Represents the actual resources of the underlying volume.
description: capacity represents the actual resources of the underlying volume.
type: object
conditions:
description: Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
description: conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
items:
description: PersistentVolumeClaimCondition contains details about the state of a PVC
properties:
lastProbeTime:
description: Last time we probed the condition.
description: lastProbeTime is the time we probed the condition.
format: date-time
type: string
lastTransitionTime:
description: Last time the condition transitioned from one status to another.
description: lastTransitionTime is the time the condition transitioned from one status to another.
format: date-time
type: string
message:
description: Human-readable message indicating details about last transition.
description: message is the human-readable message indicating details about last transition.
type: string
reason:
description: Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
description: reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
type: string
status:
type: string
@@ -2215,10 +2225,10 @@ spec:
type: object
type: array
phase:
description: Phase represents the current phase of PersistentVolumeClaim.
description: phase represents the current phase of PersistentVolumeClaim.
type: string
resizeStatus:
description: ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
type: string
type: object
type: object
@@ -2267,7 +2277,7 @@ spec:
description: An eviction is allowed if at least "minAvailable" pods selected by "selector" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying "100%".
x-kubernetes-int-or-string: true
selector:
description: Label query over pods whose evictions are managed by the disruption budget. A null selector selects no pods. An empty selector ({}) also selects no pods, which differs from standard behavior of selecting all pods. In policy/v1, an empty selector will select all pods in the namespace.
description: Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -2296,6 +2306,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
type: object
type: object
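These podDisruptionBudget fields override the default budget ECK creates for the cluster. A minimal sketch of such an override on an Elasticsearch resource, keeping at least two pods available during voluntary disruptions (the cluster name and selector label value are hypothetical):

# Illustrative sketch only.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: logging
spec:
  version: 8.4.3
  nodeSets:
  - name: default
    count: 3
  podDisruptionBudget:
    spec:
      minAvailable: 2
      selector:
        matchLabels:
          elasticsearch.k8s.elastic.co/cluster-name: logging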
remoteClusters:
@@ -2324,6 +2335,10 @@ spec:
- name
type: object
type: array
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying StatefulSets.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes secrets containing sensitive configuration options for Elasticsearch.
items:
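revisionHistoryLimit is one of the fields this diff adds, and secureSettings references Kubernetes Secrets whose entries are loaded into the Elasticsearch keystore. A minimal sketch combining both on an Elasticsearch resource (the Secret name is hypothetical):

# Illustrative sketch only.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: logging
spec:
  version: 8.4.3
  revisionHistoryLimit: 2        # keep two old StatefulSet revisions for rollback
  secureSettings:
  - secretName: elasticsearch-secure-settings   # keys become keystore entries
  nodeSets:
  - name: default
    count: 3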
@@ -2384,7 +2399,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -2427,7 +2442,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -2440,7 +2455,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -2764,7 +2779,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -2807,7 +2822,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -2820,7 +2835,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -2968,15 +2983,15 @@ spec:
type: string
type: object
spec:
description: 'Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
dataSource:
description: 'This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -2991,8 +3006,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
dataSourceRef:
description: 'Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.'
properties:
apiGroup:
description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
@@ -3007,8 +3023,9 @@ spec:
- kind
- name
type: object
x-kubernetes-map-type: atomic
resources:
description: 'Resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
properties:
limits:
additionalProperties:
@@ -3030,7 +3047,7 @@ spec:
type: object
type: object
selector:
description: A label query over volumes to consider for binding.
description: selector is a label query over volumes to consider for binding.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
@@ -3059,21 +3076,22 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
storageClassName:
description: 'Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
type: string
volumeMode:
description: volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
type: string
volumeName:
description: VolumeName is the binding reference to the PersistentVolume backing this claim.
description: volumeName is the binding reference to the PersistentVolume backing this claim.
type: string
type: object
status:
description: 'Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
description: 'status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
properties:
accessModes:
description: 'AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
description: 'accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
items:
type: string
type: array
@@ -3084,7 +3102,7 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: allocatedResources tracks the storage resource capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal to or lower than the requested capacity. This is an alpha field and requires enabling the RecoverVolumeExpansionFailure feature.
type: object
capacity:
additionalProperties:
@@ -3093,26 +3111,26 @@ spec:
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Represents the actual resources of the underlying volume.
description: capacity represents the actual resources of the underlying volume.
type: object
conditions:
description: Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
description: conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'.
items:
description: PersistentVolumeClaimCondition contains details about the state of a PVC
properties:
lastProbeTime:
description: Last time we probed the condition.
description: lastProbeTime is the time we probed the condition.
format: date-time
type: string
lastTransitionTime:
description: Last time the condition transitioned from one status to another.
description: lastTransitionTime is the time the condition transitioned from one status to another.
format: date-time
type: string
message:
description: Human-readable message indicating details about last transition.
description: message is the human-readable message indicating details about last transition.
type: string
reason:
description: Unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
description: reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
type: string
status:
type: string
@@ -3125,10 +3143,10 @@ spec:
type: object
type: array
phase:
description: Phase represents the current phase of PersistentVolumeClaim.
description: phase represents the current phase of PersistentVolumeClaim.
type: string
resizeStatus:
description: ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
description: resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature.
type: string
type: object
type: object
@@ -3207,6 +3225,7 @@ spec:
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
type: object
type: object
secureSettings:
@@ -3283,24 +3302,18 @@ spec:
type: object
served: false
storage: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: enterprisesearches.enterprisesearch.k8s.elastic.co
spec:
group: enterprisesearch.k8s.elastic.co
@@ -3407,7 +3420,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -3450,7 +3463,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -3463,7 +3476,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -3562,6 +3575,10 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the Enterprise Search pods.
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
serviceAccountName:
description: ServiceAccountName is used to check access from the current resource to a resource (for ex. Elasticsearch) in a different namespace. Can only be used if ECK is enforcing RBAC on references.
type: string
@@ -3586,6 +3603,10 @@ spec:
health:
description: Health of the deployment.
type: string
observedGeneration:
description: ObservedGeneration represents the .metadata.generation that the status is based upon. It corresponds to the metadata generation, which is updated on mutation by the API Server. If the generation observed in status diverges from the generation in metadata, the Enterprise Search controller has not yet processed the changes contained in the Enterprise Search specification.
format: int64
type: integer
selector:
description: Selector is the label selector used to find all pods.
type: string
@@ -3697,7 +3718,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -3740,7 +3761,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -3753,7 +3774,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -3891,24 +3912,18 @@ spec:
storage: false
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
# Source: eck-operator-crds/templates/all-crds.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.8.0
controller-gen.kubebuilder.io/version: v0.9.1
creationTimestamp: null
labels:
app.kubernetes.io/instance: 'elastic-operator'
app.kubernetes.io/name: 'eck-operator-crds'
app.kubernetes.io/version: '2.2.0'
app.kubernetes.io/version: '2.4.0'
name: kibanas.kibana.k8s.elastic.co
spec:
group: kibana.k8s.elastic.co
@@ -4024,7 +4039,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -4067,7 +4082,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature. More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -4080,7 +4095,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
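The embedded ServiceSpec documented here is how the generated Kibana HTTP Service is customised. A minimal sketch that keeps the Service as ClusterIP and disables the operator's self-signed certificate so Kibana serves plain HTTP behind an external proxy (resource names are hypothetical):

# Illustrative sketch only.
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: logging
spec:
  version: 8.4.3
  count: 1
  elasticsearchRef:
    name: logging
  http:
    service:
      spec:
        type: ClusterIP
    tls:
      selfSignedCertificate:
        disabled: true   # serve plain HTTP; TLS terminated upstream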
@@ -4229,6 +4244,10 @@ spec:
description: PodTemplate provides customisation options (labels, annotations, affinity rules, resource requests, and so on) for the Kibana pods
type: object
x-kubernetes-preserve-unknown-fields: true
revisionHistoryLimit:
description: RevisionHistoryLimit is the number of revisions to retain to allow rollback in the underlying Deployment.
format: int32
type: integer
secureSettings:
description: SecureSettings is a list of references to Kubernetes secrets containing sensitive configuration options for Kibana.
items:
@@ -4395,7 +4414,7 @@ spec:
description: Spec is the specification of the service.
properties:
allocateLoadBalancerNodePorts:
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. This field is beta-level and is only honored by servers that enable the ServiceLBNodePortControl feature.
description: allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.
type: boolean
clusterIP:
description: 'clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
@@ -4438,7 +4457,7 @@ spec:
description: loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.
type: string
loadBalancerIP:
description: 'Only applies to Service Type: LoadBalancer LoadBalancer will get created with the IP specified in this field. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature.'
description: 'Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.'
type: string
loadBalancerSourceRanges:
description: 'If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/'
@@ -4451,7 +4470,7 @@ spec:
description: ServicePort contains information on service's port.
properties:
appProtocol:
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
description: The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.
type: string
name:
description: The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.
@@ -4606,10 +4625,4 @@ spec:
type: object
served: false
storage: false
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@@ -14,7 +14,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: v1
@@ -24,7 +24,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
---
# Source: eck-operator/templates/configmap.yaml
apiVersion: v1
@@ -34,7 +34,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
data:
eck.yaml: |-
log-verbosity: 0
@@ -54,6 +54,7 @@ data:
validate-storage-class: true
enable-webhook: true
webhook-name: elastic-webhook.k8s.elastic.co
enable-leader-election: true
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
@@ -62,7 +63,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
rules:
- apiGroups:
- "authorization.k8s.io"
@@ -70,6 +71,22 @@ rules:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- coordination.k8s.io
resources:
- leases
resourceNames:
- elastic-operator-leader
verbs:
- get
- watch
- update
- apiGroups:
- ""
resources:
@@ -251,7 +268,7 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
@@ -284,7 +301,7 @@ metadata:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
@@ -315,7 +332,7 @@ metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@@ -333,7 +350,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
spec:
ports:
- name: https
@@ -350,7 +367,7 @@ metadata:
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
spec:
selector:
matchLabels:
@@ -363,7 +380,7 @@ spec:
# Rename the fields "error" to "error.message" and "source" to "event.source"
# This is to avoid a conflict with the ECS "error" and "source" documents.
"co.elastic.logs/raw": "[{\"type\":\"container\",\"json.keys_under_root\":true,\"paths\":[\"/var/log/containers/*${data.kubernetes.container.id}.log\"],\"processors\":[{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"error\",\"to\":\"_error\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_error\",\"to\":\"error.message\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"source\",\"to\":\"_source\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_source\",\"to\":\"event.source\"}]}}]}]"
"checksum/config": 302bbb79b6fb0ffa41fcc06e164252c7dad887cf4d8149c8e1e5203c7651277e
"checksum/config": a99a5f63f628a1ca8df440c12506cdfbf17827a1175dc5765b05f22f92b12b95
labels:
control-plane: elastic-operator
spec:
@@ -372,7 +389,7 @@ spec:
securityContext:
runAsNonRoot: true
containers:
- image: "docker.elastic.co/eck/eck-operator:2.2.0"
- image: "docker.elastic.co/eck/eck-operator:2.4.0"
imagePullPolicy: IfNotPresent
name: manager
args:
@@ -423,7 +440,7 @@ metadata:
name: elastic-webhook.k8s.elastic.co
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.2.0"
app.kubernetes.io/version: "2.4.0"
webhooks:
- clientConfig:
caBundle: Cg==

View File

@@ -79,7 +79,6 @@ metadata:
namespace: etherpad
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@@ -97,8 +96,7 @@ spec:
number: 9001
tls:
- hosts:
- pad.k-space.ee
secretName: pad-tls
- "*.k-space.ee"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy

19
grafana/README.md Normal file
View File

@@ -0,0 +1,19 @@
# Grafana
```
kubectl create namespace grafana
kubectl apply -n grafana -f application.yml
```
## OIDC secret
See Authelia README on provisioning and updating OIDC secrets for Grafana
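For reference, the `oidc-secret` consumed via `envFrom` in the StatefulSet below only needs to carry the OAuth client secret as a Grafana environment override. A minimal sketch, assuming the single `GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET` key is sufficient and that the actual value is the one issued through the Authelia flow:
```
apiVersion: v1
kind: Secret
metadata:
  name: oidc-secret
  namespace: grafana
stringData:
  # GF_<SECTION>_<KEY> overrides the matching grafana.ini option
  GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET: "<client secret provisioned via Authelia>"
```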
## Grafana post-deployment steps
* Configure Prometheus datasource with URL set to
`http://prometheus-operated.prometheus-operator.svc.cluster.local:9090`
* Configure Elasticsearch datasource with URL set to
`http://elasticsearch.elastic-system.svc.cluster.local`,
Time field name set to `timestamp` and
Elasticsearch version set to `7.10+` (a declarative provisioning sketch covering both datasources follows below)
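These datasources are currently added by hand in the UI; the same settings could also be expressed as a Grafana provisioning file. A minimal sketch, assuming it would be mounted under `/etc/grafana/provisioning/datasources/` (not part of the manifests in this repository):
```
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-operated.prometheus-operator.svc.cluster.local:9090
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://elasticsearch.elastic-system.svc.cluster.local
    jsonData:
      # mirrors the manual settings above
      timeField: timestamp
      esVersion: "7.10.0"
```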

135
grafana/application.yml Normal file
View File

@@ -0,0 +1,135 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-config
data:
grafana.ini: |
[log]
level = warn
[server]
domain = grafana.k-space.ee
root_url = https://%(domain)s/
[auth.generic_oauth]
name = OAuth
icon = signin
enabled = true
client_id = grafana
scopes = openid profile email groups
empty_scopes = false
auth_url = https://auth.k-space.ee/api/oidc/authorize
token_url = https://auth.k-space.ee/api/oidc/token
api_url = https://auth.k-space.ee/api/oidc/userinfo
allow_sign_up = true
role_attribute_path = contains(groups[*], 'Grafana Admins') && 'Admin' || 'Viewer'
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: grafana
name: grafana
spec:
revisionHistoryLimit: 0
serviceName: grafana
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
spec:
securityContext:
fsGroup: 472
containers:
- name: grafana
image: grafana/grafana:8.5.0
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 472
envFrom:
- secretRef:
name: oidc-secret
ports:
- containerPort: 3000
name: http-grafana
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /robots.txt
port: 3000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 3000
timeoutSeconds: 1
resources:
requests:
cpu: 250m
memory: 750Mi
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-data
- mountPath: /etc/grafana
name: grafana-config
volumes:
- name: grafana-config
configMap:
name: grafana-config
volumeClaimTemplates:
- metadata:
name: grafana-data
spec:
storageClassName: longhorn
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: grafana
spec:
ports:
- port: 80
protocol: TCP
targetPort: http-grafana
selector:
app: grafana
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
spec:
rules:
- host: grafana.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: grafana
port:
number: 80
tls:
- hosts:
- "*.k-space.ee"

View File

@@ -397,7 +397,6 @@ spec:
containers:
- name: core
image: goharbor/harbor-core:v2.4.2
imagePullPolicy: IfNotPresent
startupProbe:
httpGet:
path: /api/v2.0/ping
@@ -406,16 +405,9 @@ spec:
failureThreshold: 360
initialDelaySeconds: 10
periodSeconds: 10
livenessProbe:
httpGet:
path: /api/v2.0/ping
scheme: HTTP
port: 8080
failureThreshold: 2
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/v2.0/ping
path: /api/v2.0/projects
scheme: HTTP
port: 8080
failureThreshold: 2
@@ -472,6 +464,13 @@ spec:
secret:
- name: psc
emptyDir: {}
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/jobservice/jobservice-dpl.yaml
apiVersion: apps/v1
@@ -502,14 +501,6 @@ spec:
containers:
- name: jobservice
image: goharbor/harbor-jobservice:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /api/v1/stats
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/v1/stats
@@ -544,6 +535,13 @@ spec:
- name: job-logs
persistentVolumeClaim:
claimName: harbor-jobservice
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/portal/deployment.yaml
apiVersion: apps/v1
@@ -574,14 +572,6 @@ spec:
containers:
- name: portal
image: goharbor/harbor-portal:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /
@@ -599,6 +589,13 @@ spec:
- name: portal-config
configMap:
name: "harbor-portal"
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/registry/registry-dpl.yaml
apiVersion: apps/v1
@@ -629,14 +626,6 @@ spec:
containers:
- name: registry
image: goharbor/registry-photon:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
scheme: HTTP
port: 5000
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /
@@ -664,14 +653,6 @@ spec:
subPath: config.yml
- name: registryctl
image: goharbor/harbor-registryctl:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /api/health
scheme: HTTP
port: 8080
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
httpGet:
path: /api/health
@@ -722,6 +703,13 @@ spec:
- name: registry-data
persistentVolumeClaim:
claimName: harbor-registry
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
---
# Source: harbor/templates/database/database-ss.yaml
apiVersion: apps/v1
@@ -756,7 +744,6 @@ spec:
# we may remove it after several releases
- name: "data-migrator"
image: goharbor/harbor-db:v2.4.2
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "[ -e /var/lib/postgresql/data/postgresql.conf ] && [ ! -d /var/lib/postgresql/data/pgdata ] && mkdir -m 0700 /var/lib/postgresql/data/pgdata && mv /var/lib/postgresql/data/* /var/lib/postgresql/data/pgdata/ || true"]
volumeMounts:
@@ -769,7 +756,6 @@ spec:
# as "fsGroup" applied before the init container running, the container has enough permission to execute the command
- name: "data-permissions-ensurer"
image: goharbor/harbor-db:v2.4.2
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "chmod -R 700 /var/lib/postgresql/data/pgdata || true"]
volumeMounts:
@@ -779,13 +765,6 @@ spec:
containers:
- name: database
image: goharbor/harbor-db:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- /docker-healthcheck.sh
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
exec:
command:
@@ -811,6 +790,13 @@ spec:
emptyDir:
medium: Memory
sizeLimit: 512Mi
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: "database-data"
@@ -853,12 +839,6 @@ spec:
containers:
- name: redis
image: goharbor/redis-photon:v2.4.2
imagePullPolicy: IfNotPresent
livenessProbe:
tcpSocket:
port: 6379
initialDelaySeconds: 300
periodSeconds: 10
readinessProbe:
tcpSocket:
port: 6379
@@ -868,6 +848,13 @@ spec:
- name: data
mountPath: /var/lib/redis
subPath:
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: data
@@ -970,15 +957,6 @@ spec:
mountPath: /home/scanner/.cache
subPath:
readOnly: false
livenessProbe:
httpGet:
scheme: HTTP
path: /probe/healthy
port: api-server
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 10
readinessProbe:
httpGet:
scheme: HTTP
@@ -995,6 +973,13 @@ spec:
requests:
cpu: 200m
memory: 512Mi
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: data
@@ -1016,7 +1001,6 @@ metadata:
labels:
app: harbor
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
ingress.kubernetes.io/proxy-body-size: "0"
ingress.kubernetes.io/ssl-redirect: "true"
@@ -1027,9 +1011,8 @@ metadata:
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- secretName: harbor-tls
hosts:
- harbor.k-space.ee
- hosts:
- "*.k-space.ee"
rules:
- http:
paths:

View File

@@ -219,3 +219,276 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-state-metrics
spec:
groups:
- name: kube-state-metrics
rules:
- alert: KubernetesNodeReady
expr: kube_node_status_condition{condition="Ready",status="true"} == 0
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes Node ready (instance {{ $labels.instance }})
description: "Node {{ $labels.node }} has been unready for a long time\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesMemoryPressure
expr: kube_node_status_condition{condition="MemoryPressure",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes memory pressure (instance {{ $labels.instance }})
description: "{{ $labels.node }} has MemoryPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDiskPressure
expr: kube_node_status_condition{condition="DiskPressure",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes disk pressure (instance {{ $labels.instance }})
description: "{{ $labels.node }} has DiskPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesOutOfDisk
expr: kube_node_status_condition{condition="OutOfDisk",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes out of disk (instance {{ $labels.instance }})
description: "{{ $labels.node }} has OutOfDisk condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesOutOfCapacity
expr: sum by (node) ((kube_pod_status_phase{phase="Running"} == 1) + on(uid) group_left(node) (0 * kube_pod_info{pod_template_hash=""})) / sum by (node) (kube_node_status_allocatable{resource="pods"}) * 100 > 90
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes out of capacity (instance {{ $labels.instance }})
description: "{{ $labels.node }} is out of capacity\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesContainerOomKiller
expr: (kube_pod_container_status_restarts_total - kube_pod_container_status_restarts_total offset 10m >= 1) and ignoring (reason) min_over_time(kube_pod_container_status_last_terminated_reason{reason="OOMKilled"}[10m]) == 1
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes container oom killer (instance {{ $labels.instance }})
description: "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} has been OOMKilled {{ $value }} times in the last 10 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesJobFailed
expr: kube_job_status_failed > 0
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes Job failed (instance {{ $labels.instance }})
description: "Job {{$labels.namespace}}/{{$labels.exported_job}} failed to complete\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesCronjobSuspended
expr: kube_cronjob_spec_suspend != 0
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes CronJob suspended (instance {{ $labels.instance }})
description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is suspended\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPersistentvolumeclaimPending
expr: kube_persistentvolumeclaim_status_phase{phase="Pending"} == 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes PersistentVolumeClaim pending (instance {{ $labels.instance }})
description: "PersistentVolumeClaim {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is pending\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesVolumeOutOfDiskSpace
expr: kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes * 100 < 10
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes Volume out of disk space (instance {{ $labels.instance }})
description: "Volume is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesVolumeFullInFourDays
expr: predict_linear(kubelet_volume_stats_available_bytes[6h], 4 * 24 * 3600) < 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Volume full in four days (instance {{ $labels.instance }})
description: "{{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is expected to fill up within four days. Currently {{ $value | humanize }}% is available.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPersistentvolumeError
expr: kube_persistentvolume_status_phase{phase=~"Failed|Pending", job="kube-state-metrics"} > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes PersistentVolume error (instance {{ $labels.instance }})
description: "Persistent volume is in bad state\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetDown
expr: (kube_statefulset_status_replicas_ready / kube_statefulset_status_replicas_current) != 1
for: 1m
labels:
severity: critical
annotations:
summary: Kubernetes StatefulSet down (instance {{ $labels.instance }})
description: "A StatefulSet went down\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaScalingAbility
expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="AbleToScale"} == 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes HPA scaling ability (instance {{ $labels.instance }})
description: "Pod is unable to scale\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaMetricAvailability
expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="ScalingActive"} == 1
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes HPA metric availability (instance {{ $labels.instance }})
description: "HPA is not able to collect metrics\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaScaleCapability
expr: kube_horizontalpodautoscaler_status_desired_replicas >= kube_horizontalpodautoscaler_spec_max_replicas
for: 2m
labels:
severity: info
annotations:
summary: Kubernetes HPA scale capability (instance {{ $labels.instance }})
description: "The maximum number of desired Pods has been hit\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPodNotHealthy
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
description: "Pod has been in a non-ready state for longer than 15 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPodCrashLooping
expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesReplicassetMismatch
expr: kube_replicaset_spec_replicas != kube_replicaset_status_ready_replicas
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes ReplicasSet mismatch (instance {{ $labels.instance }})
description: "Deployment Replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDeploymentReplicasMismatch
expr: kube_deployment_spec_replicas != kube_deployment_status_replicas_available
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes Deployment replicas mismatch (instance {{ $labels.instance }})
description: "Deployment Replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetReplicasMismatch
expr: kube_statefulset_status_replicas_ready != kube_statefulset_status_replicas
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes StatefulSet replicas mismatch (instance {{ $labels.instance }})
description: "A StatefulSet does not match the expected number of replicas.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDeploymentGenerationMismatch
expr: kube_deployment_status_observed_generation != kube_deployment_metadata_generation
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes Deployment generation mismatch (instance {{ $labels.instance }})
description: "A Deployment has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetGenerationMismatch
expr: kube_statefulset_status_observed_generation != kube_statefulset_metadata_generation
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes StatefulSet generation mismatch (instance {{ $labels.instance }})
description: "A StatefulSet has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetUpdateNotRolledOut
expr: max without (revision) (kube_statefulset_status_current_revision unless kube_statefulset_status_update_revision) * (kube_statefulset_replicas != kube_statefulset_status_replicas_updated)
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes StatefulSet update not rolled out (instance {{ $labels.instance }})
description: "StatefulSet update has not been rolled out.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDaemonsetRolloutStuck
expr: kube_daemonset_status_number_ready / kube_daemonset_status_desired_number_scheduled * 100 < 100 or kube_daemonset_status_desired_number_scheduled - kube_daemonset_status_current_number_scheduled > 0
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes DaemonSet rollout stuck (instance {{ $labels.instance }})
description: "Some Pods of DaemonSet are not scheduled or not ready\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDaemonsetMisscheduled
expr: kube_daemonset_status_number_misscheduled > 0
for: 1m
labels:
severity: critical
annotations:
summary: Kubernetes DaemonSet misscheduled (instance {{ $labels.instance }})
description: "Some DaemonSet Pods are running where they are not supposed to run\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesCronjobTooLong
expr: time() - kube_cronjob_next_schedule_time > 3600
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes CronJob too long (instance {{ $labels.instance }})
description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is taking more than 1h to complete.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesJobSlowCompletion
expr: kube_job_spec_completions - kube_job_status_succeeded > 0
for: 12h
labels:
severity: critical
annotations:
summary: Kubernetes job slow completion (instance {{ $labels.instance }})
description: "Kubernetes Job {{ $labels.namespace }}/{{ $labels.job_name }} did not complete in time.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiServerErrors
expr: sum(rate(apiserver_request_total{job="apiserver",code=~"^(?:5..)$"}[1m])) / sum(rate(apiserver_request_total{job="apiserver"}[1m])) * 100 > 3
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes API server errors (instance {{ $labels.instance }})
description: "Kubernetes API server is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiClientErrors
expr: (sum(rate(rest_client_requests_total{code=~"(4|5).."}[1m])) by (instance, job) / sum(rate(rest_client_requests_total[1m])) by (instance, job)) * 100 > 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes API client errors (instance {{ $labels.instance }})
description: "Kubernetes API client is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesClientCertificateExpiresNextWeek
expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 7*24*60*60
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes client certificate expires next week (instance {{ $labels.instance }})
description: "A client certificate used to authenticate to the apiserver is expiring next week.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesClientCertificateExpiresSoon
expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 24*60*60
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes client certificate expires soon (instance {{ $labels.instance }})
description: "A client certificate used to authenticate to the apiserver is expiring in less than 24.0 hours.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiServerLatency
expr: histogram_quantile(0.99, sum(rate(apiserver_request_latencies_bucket{subresource!="log",verb!~"^(?:CONNECT|WATCHLIST|WATCH|PROXY)$"} [10m])) WITHOUT (instance, resource)) / 1e+06 > 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes API server latency (instance {{ $labels.instance }})
description: "Kubernetes API server has a 99th percentile latency of {{ $value }} seconds for {{ $labels.verb }} {{ $labels.resource }}.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"

View File

@@ -0,0 +1,197 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
- --metric-resolution=15s
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100

View File

@@ -269,7 +269,6 @@ metadata:
certManager: "true"
rewriteTarget: "true"
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
@@ -289,5 +288,4 @@ spec:
number: 80
tls:
- hosts:
- dashboard.k-space.ee
secretName: dashboard-tls
- "*.k-space.ee"

View File

@@ -14,7 +14,7 @@ To deploy:
```
kubectl create namespace logging
kubectl apply -n logging -f mongodb-support.yml -f application.yml -f filebeat.yml -f networkpolicy-base.yml
kubectl apply -n logging -f zinc.yml -f application.yml -f filebeat.yml -f networkpolicy-base.yml
kubectl rollout restart -n logging daemonset.apps/filebeat
```

View File

@@ -1,452 +0,0 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
serviceName: elasticsearch
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
securityContext:
fsGroup: 1000
containers:
- name: elasticsearch
image: elasticsearch:7.17.3
securityContext:
runAsNonRoot: true
runAsUser: 1000
env:
- name: discovery.type
value: single-node
- name: xpack.security.enabled
value: "false"
ports:
- containerPort: 9200
readinessProbe:
httpGet:
path: /_cluster/health
port: 9200
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
memory: "2147483648"
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data
- name: elasticsearch-tmp
mountPath: /tmp/
volumes:
- emptyDir: {}
name: elasticsearch-keystore
- emptyDir: {}
name: elasticsearch-tmp
- emptyDir: {}
name: elasticsearch-logs
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "10Gi"
storageClassName: longhorn
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
ports:
- name: api
port: 80
targetPort: 9200
selector:
app: elasticsearch
---
apiVersion: v1
kind: Service
metadata:
name: graylog-gelf-tcp
labels:
app: graylog
spec:
ports:
- name: graylog-gelf-tcp
port: 12201
protocol: TCP
targetPort: 12201
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog-logstash
labels:
app: graylog
spec:
ports:
- name: graylog-logstash
port: 5044
protocol: TCP
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog-syslog-tcp
labels:
app: graylog
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: graylog-syslog
port: 514
protocol: TCP
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog-syslog-udp
labels:
app: graylog
annotations:
external-dns.alpha.kubernetes.io/hostname: syslog.k-space.ee
metallb.universe.tf/allow-shared-ip: syslog.k-space.ee
spec:
type: LoadBalancer
externalTrafficPolicy: Local
loadBalancerIP: 172.20.51.4
ports:
- name: graylog-syslog
port: 514
protocol: UDP
selector:
app: graylog
---
apiVersion: v1
kind: Service
metadata:
name: graylog
labels:
app: graylog
spec:
ports:
- name: graylog
port: 9000
protocol: TCP
selector:
app: graylog
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: graylog
labels:
app: graylog
annotations:
keel.sh/policy: minor
keel.sh/trigger: poll
keel.sh/pollSchedule: "@midnight"
spec:
serviceName: graylog
revisionHistoryLimit: 0
replicas: 1
selector:
matchLabels:
app: graylog
template:
metadata:
labels:
app: graylog
annotations:
prometheus.io/port: "9833"
prometheus.io/scrape: "true"
spec:
securityContext:
fsGroup: 1100
volumes:
- name: graylog-config
downwardAPI:
items:
- path: id
fieldRef:
fieldPath: metadata.name
containers:
- name: graylog
image: graylog/graylog:4.3
env:
- name: GRAYLOG_MONGODB_URI
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
- name: GRAYLOG_PROMETHEUS_EXPORTER_ENABLED
value: "true"
- name: GRAYLOG_PROMETHEUS_EXPORTER_BIND_ADDRESS
value: "0.0.0.0:9833"
- name: GRAYLOG_NODE_ID_FILE
value: /config/id
- name: GRAYLOG_HTTP_EXTERNAL_URI
value: "https://graylog.k-space.ee/"
- name: GRAYLOG_TRUSTED_PROXIES
value: "0.0.0.0/0"
- name: GRAYLOG_ELASTICSEARCH_HOSTS
value: "http://elasticsearch"
- name: GRAYLOG_MESSAGE_JOURNAL_ENABLED
value: "false"
- name: GRAYLOG_ROTATION_STRATEGY
value: "size"
- name: GRAYLOG_ELASTICSEARCH_MAX_SIZE_PER_INDEX
value: "268435456"
- name: GRAYLOG_ELASTICSEARCH_MAX_NUMBER_OF_INDICES
value: "16"
envFrom:
- secretRef:
name: graylog-secrets
securityContext:
runAsNonRoot: true
runAsUser: 1100
ports:
- containerPort: 9000
name: graylog
- containerPort: 9833
name: graylog-metrics
livenessProbe:
httpGet:
path: /api/system/lbstatus
port: 9000
initialDelaySeconds: 5
periodSeconds: 30
failureThreshold: 3
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /api/system/lbstatus
port: 9000
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
successThreshold: 1
timeoutSeconds: 5
volumeMounts:
- name: graylog-config
mountPath: /config
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: graylog
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
spec:
rules:
- host: graylog.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: graylog
port:
number: 9000
tls:
- hosts:
- graylog.k-space.ee
secretName: graylog-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: graylog
spec:
podSelector:
matchLabels:
app: graylog
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: elasticsearch
ports:
- port: 9200
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
ingress:
- from:
- ipBlock:
cidr: 172.23.0.0/16
- ipBlock:
cidr: 172.21.0.0/16
- ipBlock:
cidr: 100.102.0.0/16
ports:
- protocol: UDP
port: 514
- protocol: TCP
port: 514
- from:
- podSelector:
matchLabels:
app: filebeat
ports:
- protocol: TCP
port: 5044
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: prometheus
ports:
- port: 9833
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ports:
- protocol: TCP
port: 9000
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: elasticsearch
spec:
podSelector:
matchLabels:
app: elasticsearch
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: graylog
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: monitoring
podSelector:
matchLabels:
app: grafana
egress:
- to:
- ipBlock:
# geoip.elastic.co updates
cidr: 0.0.0.0/0
ports:
- port: 443
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongodb
spec:
members: 3
type: ReplicaSet
version: "5.0.9"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: mongodb-application-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: mongodb-application-readwrite
- name: readonly
db: application
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
spec:
template:
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: storage
tolerations:
- key: dedicated
operator: Equal
value: storage
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi

View File

@@ -6,18 +6,15 @@ metadata:
namespace: logging
data:
filebeat.yml: |-
logging:
level: warning
setup:
ilm:
enabled: false
template:
name: filebeat
pattern: filebeat-*
http.enabled: true
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
filebeat.autodiscover:
providers:
- type: kubernetes
@@ -27,50 +24,24 @@ data:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- add_host_metadata:
- drop_fields:
fields:
- stream
ignore_missing: true
- rename:
fields:
- from: "kubernetes.node.name"
to: "source"
- from: "kubernetes.pod.name"
to: "pod"
- from: "stream"
to: "stream"
- from: "kubernetes.labels.app"
to: "app"
- from: "kubernetes.namespace"
to: "namespace"
ignore_missing: true
- drop_fields:
fields:
- agent
- container
- ecs
- host
- kubernetes
- log
- "@metadata"
ignore_missing: true
output.logstash:
hosts: ["graylog-logstash:5044"]
#output.console:
# pretty: true
output:
elasticsearch:
hosts:
- http://zinc:4080
path: "/es/"
index: "filebeat-%{+yyyy.MM.dd}"
username: "${ZINC_FIRST_ADMIN_USER}"
password: "${ZINC_FIRST_ADMIN_PASSWORD}"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: logging
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 100%
maxUnavailable: 50%
selector:
matchLabels:
app: filebeat
@@ -78,72 +49,86 @@ spec:
metadata:
labels:
app: filebeat
annotations:
co.elastic.logs/json.keys_under_root: "true"
spec:
serviceAccountName: filebeat
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.17.6
args:
- -c
- /etc/filebeat.yml
- -e
securityContext:
runAsUser: 0
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- containerPort: 5066
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: filebeat-config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
- name: filebeat
image: docker.elastic.co/beats/filebeat:8.4.1
args:
- -c
- /etc/filebeat.yml
- -e
securityContext:
runAsUser: 0
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: ZINC_FIRST_ADMIN_USER
value: admin
- name: ZINC_FIRST_ADMIN_PASSWORD
value: salakala
ports:
- containerPort: 5066
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: filebeat-config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
- name: exporter
image: sepa/beats-exporter
args:
- -p=5066
ports:
- containerPort: 8080
name: exporter
protocol: TCP
volumes:
- name: filebeat-config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
- name: filebeat-config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
tolerations:
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: logging-filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: logging
- kind: ServiceAccount
name: filebeat
namespace: logging
roleRef:
kind: ClusterRole
name: filebeat
@@ -166,13 +151,35 @@ spec:
matchLabels:
app: filebeat
policyTypes:
- Ingress
- Egress
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
ports:
- protocol: TCP
port: 8080
egress:
- to:
- podSelector:
matchLabels:
app: graylog
ports:
- protocol: TCP
port: 5044
- to:
- podSelector:
matchLabels:
app: zinc
ports:
- protocol: TCP
port: 4080
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: filebeat
spec:
selector:
matchLabels:
app: filebeat
podMetricsEndpoints:
- port: exporter

122
logging/zinc.yml Normal file
View File

@@ -0,0 +1,122 @@
apiVersion: v1
kind: Service
metadata:
name: zinc
spec:
clusterIP: None
selector:
app: zinc
ports:
- name: http
port: 4080
targetPort: 4080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zinc
spec:
serviceName: zinc
replicas: 1
selector:
matchLabels:
app: zinc
template:
metadata:
labels:
app: zinc
spec:
securityContext:
fsGroup: 2000
runAsUser: 10000
runAsGroup: 3000
runAsNonRoot: true
containers:
- name: zinc
image: public.ecr.aws/zinclabs/zinc:latest
env:
- name: GIN_MODE
value: release
- name: ZINC_FIRST_ADMIN_USER
value: admin
- name: ZINC_FIRST_ADMIN_PASSWORD
value: salakala
- name: ZINC_DATA_PATH
value: /data
imagePullPolicy: Always
resources:
limits:
cpu: "4"
memory: 4Gi
requests:
cpu: 32m
memory: 50Mi
ports:
- containerPort: 4080
name: http
volumeMounts:
- name: data
mountPath: /data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: longhorn
resources:
requests:
storage: 20Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: zinc
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
spec:
rules:
- host: zinc.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: zinc
port:
number: 4080
tls:
- hosts:
- zinc.k-space.ee
secretName: zinc-tls
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: zinc
spec:
podSelector:
matchLabels:
app: zinc
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: filebeat
ports:
- protocol: TCP
port: 4080
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik

View File

@@ -5,7 +5,6 @@ metadata:
namespace: longhorn-system
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
@@ -24,9 +23,7 @@ spec:
number: 80
tls:
- hosts:
- longhorn.k-space.ee
secretName: longhorn-tls
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor

View File

@@ -1056,9 +1056,6 @@ spec:
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9500"
labels:
app: longhorn-manager
name: longhorn-backend

View File

@@ -6,11 +6,13 @@ metadata:
spec:
podSelector: {}
policyTypes:
- Egress
- Egress
egress:
- # TODO: Not sure why mysql-operator needs to be able to connect
to:
- namespaceSelector: {}
ports:
- protocol: TCP
port: 33060
- protocol: TCP
port: 3306

View File

@@ -559,10 +559,10 @@ metadata:
name: mysql-operator
namespace: mysql-operator
labels:
version: "8.0.30-2.0.5"
version: "8.0.30-2.0.6"
app.kubernetes.io/name: mysql-operator
app.kubernetes.io/instance: mysql-operator
app.kubernetes.io/version: "8.0.30-2.0.5"
app.kubernetes.io/version: "8.0.30-2.0.6"
app.kubernetes.io/component: controller
app.kubernetes.io/managed-by: helm
app.kubernetes.io/created-by: helm
@@ -578,7 +578,7 @@ spec:
spec:
containers:
- name: mysql-operator
image: mysql/mysql-operator:8.0.30-2.0.5
image: mysql/mysql-operator:8.0.30-2.0.6
imagePullPolicy: IfNotPresent
args: ["mysqlsh", "--log-level=@INFO", "--pym", "mysqloperator", "operator"]
env:

View File

@@ -26,7 +26,9 @@ spec:
- name: PMA_ARBITRARY
value: "1"
- name: PMA_HOSTS
value: mysql-cluster.etherpad.svc.cluster.local,mariadb.authelia,mariadb.nextcloud,172.20.36.1
value: mysql-cluster.authelia,mysql-cluster.etherpad,mariadb.authelia,mariadb.nextcloud,172.20.36.1
- name: PMA_PORTS
value: 6446,6446,3306,3306,3306
- name: PMA_ABSOLUTE_URI
value: https://phpmyadmin.k-space.ee/
- name: UPLOAD_LIMIT
@@ -38,7 +40,6 @@ metadata:
name: phpmyadmin
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@@ -57,8 +58,7 @@ spec:
number: 80
tls:
- hosts:
- phpmyadmin.k-space.ee
secretName: phpmyadmin-tls
- "*.k-space.ee"
---
apiVersion: v1
kind: Service
@@ -98,7 +98,7 @@ spec:
to:
- namespaceSelector: {}
ports:
- port: 3306
- port: 6446
- # Allow connecting to any MySQL instance outside the cluster
to:
- ipBlock:

10
playground/README.md Normal file
View File

@@ -0,0 +1,10 @@
# Playground
Playground namespace is accessible to the `Developers` AD group.
A novel log aggregator is being developed in this namespace:
```
kubectl create secret generic -n playground mongodb-application-readwrite-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl create secret generic -n playground mongodb-application-readonly-password --from-literal="password=$(cat /dev/urandom | base64 | head -c 30)"
kubectl apply -n playground -f logging.yml -f mongodb-support.yml -f mongoexpress.yml -f networkpolicy-base.yml

263
playground/logging.yml Normal file
View File

@@ -0,0 +1,263 @@
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongodb
spec:
additionalMongodConfig:
systemLog:
quiet: true
members: 3
type: ReplicaSet
version: "5.0.13"
security:
authentication:
modes: ["SCRAM"]
users:
- name: readwrite
db: application
passwordSecretRef:
name: mongodb-application-readwrite-password
roles:
- name: readWrite
db: application
scramCredentialsSecretName: mongodb-application-readwrite
- name: readonly
db: application
passwordSecretRef:
name: mongodb-application-readonly-password
roles:
- name: readOnly
db: application
scramCredentialsSecretName: mongodb-application-readonly
statefulSet:
spec:
logLevel: WARN
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 100m
memory: 2Gi
limits:
cpu: 2000m
memory: 2Gi
- name: mongodb-agent
resources:
requests:
cpu: 1m
memory: 100Mi
limits: {}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongodb-svc
topologyKey: kubernetes.io/hostname
nodeSelector:
dedicated: monitoring
tolerations:
- key: dedicated
operator: Equal
value: monitoring
effect: NoSchedule
volumeClaimTemplates:
- metadata:
name: logs-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 512Mi
- metadata:
name: data-volume
spec:
storageClassName: local-path
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: log-shipper
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 50%
selector:
matchLabels:
app: log-shipper
template:
metadata:
labels:
app: log-shipper
spec:
serviceAccountName: log-shipper
containers:
- name: log-shipper
image: harbor.k-space.ee/k-space/log-shipper
securityContext:
runAsUser: 0
env:
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MONGODB_HOST
valueFrom:
secretKeyRef:
name: mongodb-application-readwrite
key: connectionString.standard
ports:
- containerPort: 8000
name: metrics
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: etcmachineid
mountPath: /etc/machine-id
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: etcmachineid
hostPath:
path: /etc/machine-id
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
tolerations:
- operator: "Exists"
effect: "NoExecute"
- operator: "Exists"
effect: "NoSchedule"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: logging-log-shipper
subjects:
- kind: ServiceAccount
name: log-shipper
namespace: playground
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: log-shipper
labels:
app: log-shipper
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-shipper
spec:
podSelector:
matchLabels:
app: log-shipper
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: prometheus-operator
podSelector:
matchLabels:
app.kubernetes.io/name: prometheus
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ports:
- port: 27017
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-backend
spec:
podSelector:
matchLabels:
app: log-viewer-backend
policyTypes:
- Ingress
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: mongodb-svc
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: log-viewer-frontend
spec:
podSelector:
matchLabels:
app: log-viewer-frontend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: traefik
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: log-shipper
spec:
selector:
matchLabels:
app: log-shipper
podMetricsEndpoints:
- port: metrics

1
playground/mongoexpress.yml Symbolic link
View File

@@ -0,0 +1 @@
../shared/mongoexpress.yml

1
playground/networkpolicy-base.yml Symbolic link
View File

@@ -0,0 +1 @@
../shared/networkpolicy-base.yml

View File

@@ -9,7 +9,16 @@ kubectl create -n prometheus-operator configmap snmp-exporter --from-file=snmp.y
kubectl apply -n prometheus-operator -f application.yml -f node-exporter.yml -f blackbox-exporter.yml -f snmp-exporter.yml -f mikrotik-exporter.yml
```
# Mikrotik expoeter
# Slack
```
kubectl create -n prometheus-operator secret generic slack-secrets \
--from-literal=webhook-url=https://hooks.slack.com/services/...
```
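Not from the repository: one hedged way to confirm the webhook secret is in place and reaches Alertmanager. The pod and container names assume the Alertmanager resource below is named `alertmanager`, and the path assumes prometheus-operator's convention of mounting entries of `spec.secrets` under `/etc/alertmanager/secrets/`.
```
# Illustrative only: check the secret and its copy inside the Alertmanager pod.
kubectl -n prometheus-operator get secret slack-secrets -o jsonpath='{.data.webhook-url}' | base64 -d | head -c 35; echo
kubectl -n prometheus-operator exec alertmanager-alertmanager-0 -c alertmanager -- \
  ls /etc/alertmanager/secrets/slack-secrets
```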
# Mikrotik exporter
```
kubectl create -n prometheus-operator secret generic mikrotik-exporter \

View File

@@ -1,4 +1,22 @@
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmanager
labels:
app.kubernetes.io/name: alertmanager
spec:
route:
receiver: 'slack-notifications'
receivers:
- name: 'slack-notifications'
slackConfigs:
- channel: '#kube-prod'
sendResolved: true
apiURL:
name: slack-secrets
key: webhook-url
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
@@ -15,6 +33,11 @@ kind: Alertmanager
metadata:
name: alertmanager
spec:
alertmanagerConfigSelector:
matchLabels:
app.kubernetes.io/name: alertmanager
secrets:
- slack-secrets
nodeSelector:
dedicated: monitoring
tolerations:
@@ -52,10 +75,8 @@ spec:
alerting:
alertmanagers:
- namespace: prometheus-operator
name: alertmanager
port: http
pathPrefix: "/"
apiVersion: v2
name: alertmanager-operated
port: web
externalUrl: "http://prom.k-space.ee/"
replicas: 2
shards: 1
@@ -378,7 +399,6 @@ kind: Ingress
metadata:
name: prometheus
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@@ -397,15 +417,13 @@ spec:
number: 9090
tls:
- hosts:
- prom.k-space.ee
secretName: prom-tls
- "*.k-space.ee"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: alertmanager
annotations:
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
@@ -424,8 +442,7 @@ spec:
number: 9093
tls:
- hosts:
- am.k-space.ee
secretName: alertmanager-tls
- "*.k-space.ee"
---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
@@ -487,276 +504,3 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: kubelet
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kube-state-metrics
spec:
groups:
- name: kube-state-metrics
rules:
- alert: KubernetesNodeReady
expr: kube_node_status_condition{condition="Ready",status="true"} == 0
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes Node ready (instance {{ $labels.instance }})
description: "Node {{ $labels.node }} has been unready for a long time\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesMemoryPressure
expr: kube_node_status_condition{condition="MemoryPressure",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes memory pressure (instance {{ $labels.instance }})
description: "{{ $labels.node }} has MemoryPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDiskPressure
expr: kube_node_status_condition{condition="DiskPressure",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes disk pressure (instance {{ $labels.instance }})
description: "{{ $labels.node }} has DiskPressure condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesOutOfDisk
expr: kube_node_status_condition{condition="OutOfDisk",status="true"} == 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes out of disk (instance {{ $labels.instance }})
description: "{{ $labels.node }} has OutOfDisk condition\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesOutOfCapacity
expr: sum by (node) ((kube_pod_status_phase{phase="Running"} == 1) + on(uid) group_left(node) (0 * kube_pod_info{pod_template_hash=""})) / sum by (node) (kube_node_status_allocatable{resource="pods"}) * 100 > 90
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes out of capacity (instance {{ $labels.instance }})
description: "{{ $labels.node }} is out of capacity\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesContainerOomKiller
expr: (kube_pod_container_status_restarts_total - kube_pod_container_status_restarts_total offset 10m >= 1) and ignoring (reason) min_over_time(kube_pod_container_status_last_terminated_reason{reason="OOMKilled"}[10m]) == 1
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes container oom killer (instance {{ $labels.instance }})
description: "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} has been OOMKilled {{ $value }} times in the last 10 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesJobFailed
expr: kube_job_status_failed > 0
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes Job failed (instance {{ $labels.instance }})
description: "Job {{$labels.namespace}}/{{$labels.exported_job}} failed to complete\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesCronjobSuspended
expr: kube_cronjob_spec_suspend != 0
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes CronJob suspended (instance {{ $labels.instance }})
description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is suspended\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPersistentvolumeclaimPending
expr: kube_persistentvolumeclaim_status_phase{phase="Pending"} == 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes PersistentVolumeClaim pending (instance {{ $labels.instance }})
description: "PersistentVolumeClaim {{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is pending\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesVolumeOutOfDiskSpace
expr: kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes * 100 < 10
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes Volume out of disk space (instance {{ $labels.instance }})
description: "Volume is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesVolumeFullInFourDays
expr: predict_linear(kubelet_volume_stats_available_bytes[6h], 4 * 24 * 3600) < 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Volume full in four days (instance {{ $labels.instance }})
description: "{{ $labels.namespace }}/{{ $labels.persistentvolumeclaim }} is expected to fill up within four days. Currently {{ $value | humanize }}% is available.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPersistentvolumeError
expr: kube_persistentvolume_status_phase{phase=~"Failed|Pending", job="kube-state-metrics"} > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes PersistentVolume error (instance {{ $labels.instance }})
description: "Persistent volume is in bad state\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetDown
expr: (kube_statefulset_status_replicas_ready / kube_statefulset_status_replicas_current) != 1
for: 1m
labels:
severity: critical
annotations:
summary: Kubernetes StatefulSet down (instance {{ $labels.instance }})
description: "A StatefulSet went down\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaScalingAbility
expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="AbleToScale"} == 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes HPA scaling ability (instance {{ $labels.instance }})
description: "Pod is unable to scale\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaMetricAvailability
expr: kube_horizontalpodautoscaler_status_condition{status="false", condition="ScalingActive"} == 1
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes HPA metric availability (instance {{ $labels.instance }})
description: "HPA is not able to collect metrics\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesHpaScaleCapability
expr: kube_horizontalpodautoscaler_status_desired_replicas >= kube_horizontalpodautoscaler_spec_max_replicas
for: 2m
labels:
severity: info
annotations:
summary: Kubernetes HPA scale capability (instance {{ $labels.instance }})
description: "The maximum number of desired Pods has been hit\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPodNotHealthy
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
description: "Pod has been in a non-ready state for longer than 15 minutes.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesPodCrashLooping
expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesReplicassetMismatch
expr: kube_replicaset_spec_replicas != kube_replicaset_status_ready_replicas
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes ReplicaSet mismatch (instance {{ $labels.instance }})
description: "Deployment Replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDeploymentReplicasMismatch
expr: kube_deployment_spec_replicas != kube_deployment_status_replicas_available
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes Deployment replicas mismatch (instance {{ $labels.instance }})
description: "Deployment Replicas mismatch\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetReplicasMismatch
expr: kube_statefulset_status_replicas_ready != kube_statefulset_status_replicas
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes StatefulSet replicas mismatch (instance {{ $labels.instance }})
description: "A StatefulSet does not match the expected number of replicas.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDeploymentGenerationMismatch
expr: kube_deployment_status_observed_generation != kube_deployment_metadata_generation
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes Deployment generation mismatch (instance {{ $labels.instance }})
description: "A Deployment has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetGenerationMismatch
expr: kube_statefulset_status_observed_generation != kube_statefulset_metadata_generation
for: 10m
labels:
severity: critical
annotations:
summary: Kubernetes StatefulSet generation mismatch (instance {{ $labels.instance }})
description: "A StatefulSet has failed but has not been rolled back.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesStatefulsetUpdateNotRolledOut
expr: max without (revision) (kube_statefulset_status_current_revision unless kube_statefulset_status_update_revision) * (kube_statefulset_replicas != kube_statefulset_status_replicas_updated)
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes StatefulSet update not rolled out (instance {{ $labels.instance }})
description: "StatefulSet update has not been rolled out.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDaemonsetRolloutStuck
expr: kube_daemonset_status_number_ready / kube_daemonset_status_desired_number_scheduled * 100 < 100 or kube_daemonset_status_desired_number_scheduled - kube_daemonset_status_current_number_scheduled > 0
for: 10m
labels:
severity: warning
annotations:
summary: Kubernetes DaemonSet rollout stuck (instance {{ $labels.instance }})
description: "Some Pods of DaemonSet are not scheduled or not ready\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesDaemonsetMisscheduled
expr: kube_daemonset_status_number_misscheduled > 0
for: 1m
labels:
severity: critical
annotations:
summary: Kubernetes DaemonSet misscheduled (instance {{ $labels.instance }})
description: "Some DaemonSet Pods are running where they are not supposed to run\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesCronjobTooLong
expr: time() - kube_cronjob_next_schedule_time > 3600
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes CronJob too long (instance {{ $labels.instance }})
description: "CronJob {{ $labels.namespace }}/{{ $labels.cronjob }} is taking more than 1h to complete.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesJobSlowCompletion
expr: kube_job_spec_completions - kube_job_status_succeeded > 0
for: 12h
labels:
severity: critical
annotations:
summary: Kubernetes job slow completion (instance {{ $labels.instance }})
description: "Kubernetes Job {{ $labels.namespace }}/{{ $labels.job_name }} did not complete in time.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiServerErrors
expr: sum(rate(apiserver_request_total{job="apiserver",code=~"^(?:5..)$"}[1m])) / sum(rate(apiserver_request_total{job="apiserver"}[1m])) * 100 > 3
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes API server errors (instance {{ $labels.instance }})
description: "Kubernetes API server is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiClientErrors
expr: (sum(rate(rest_client_requests_total{code=~"(4|5).."}[1m])) by (instance, job) / sum(rate(rest_client_requests_total[1m])) by (instance, job)) * 100 > 1
for: 2m
labels:
severity: critical
annotations:
summary: Kubernetes API client errors (instance {{ $labels.instance }})
description: "Kubernetes API client is experiencing high error rate\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesClientCertificateExpiresNextWeek
expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 7*24*60*60
for: 0m
labels:
severity: warning
annotations:
summary: Kubernetes client certificate expires next week (instance {{ $labels.instance }})
description: "A client certificate used to authenticate to the apiserver is expiring next week.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesClientCertificateExpiresSoon
expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 24*60*60
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes client certificate expires soon (instance {{ $labels.instance }})
description: "A client certificate used to authenticate to the apiserver is expiring in less than 24.0 hours.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- alert: KubernetesApiServerLatency
expr: histogram_quantile(0.99, sum(rate(apiserver_request_latencies_bucket{subresource!="log",verb!~"^(?:CONNECT|WATCHLIST|WATCH|PROXY)$"} [10m])) WITHOUT (instance, resource)) / 1e+06 > 1
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes API server latency (instance {{ $labels.instance }})
description: "Kubernetes API server has a 99th percentile latency of {{ $value }} seconds for {{ $labels.verb }} {{ $labels.resource }}.\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"

View File

@@ -156,7 +156,7 @@ metadata:
name: blackbox-exporter
spec:
revisionHistoryLimit: 0
replicas: 2
replicas: 3
selector:
matchLabels:
app: blackbox-exporter

View File

@@ -366,7 +366,9 @@ spec:
app: node-exporter
podMetricsEndpoints:
- port: web
scrapeTimeout: 30s
relabelings:
- sourceLabels: [__meta_kubernetes_pod_node_name]
targetLabel: node
---
apiVersion: v1
kind: ServiceAccount
@@ -429,6 +431,7 @@ spec:
readOnlyRootFilesystem: true
hostNetwork: true
hostPID: true
priorityClassName: system-node-critical
securityContext:
runAsNonRoot: true
runAsUser: 65534

View File

@@ -79,12 +79,15 @@ spec:
prober:
url: snmp-exporter:9116
path: /snmp
metricRelabelings:
- sourceLabels: [__name__]
regex: '(.*)'
replacement: 'snmp_${1}'
targetLabel: __name__
targets:
staticConfig:
static:
- ups-4.mgmt.k-space.ee
- ups-5.mgmt.k-space.ee
- ups-6.mgmt.k-space.ee
- ups-7.mgmt.k-space.ee
- ups-8.mgmt.k-space.ee
- ups-9.mgmt.k-space.ee
@@ -108,7 +111,7 @@ spec:
annotations:
summary: One or more UPSes is not in normal operation mode. This means either
power has been lost or the UPS is overloaded and now in bypass mode.
expr: sum(snmp_upsOutputSource { upsOutputSource = 'normal' }) < 6
expr: sum(snmp_upsOutputSource { upsOutputSource = 'normal' }) != 4
for: 1m
labels:
severity: critical
@@ -132,6 +135,11 @@ spec:
prober:
url: snmp-exporter:9116
path: /snmp
metricRelabelings:
- sourceLabels: [__name__]
regex: '(.*)'
replacement: 'snmp_${1}'
targetLabel: __name__
targets:
staticConfig:
static:
@@ -166,6 +174,11 @@ spec:
prober:
url: snmp-exporter:9116
path: /snmp
metricRelabelings:
- sourceLabels: [__name__]
regex: '(.*)'
replacement: 'snmp_${1}'
targetLabel: __name__
targets:
staticConfig:
static:

View File

@@ -33,6 +33,7 @@ epson_beamer:
type: gauge
printer_mib:
version: 1
walk:
- 1.3.6.1.2.1.25.3.5.1.1
- 1.3.6.1.2.1.43.11.1.1.5

View File

@@ -5,5 +5,6 @@ Calico implements the inter-pod overlay network
```
curl https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml -O
curl https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -O
kubectl apply -f tigera-operator.yaml -f custom-resources.yaml
kubectl apply -f custom-resources.yaml
kubectl replace -f tigera-operator.yaml
```
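Not part of the repository: rough post-install checks, assuming the operator-managed `calico-system` namespace and the `Installation` resource being named `default` as in the stock custom-resources.yaml.
```
# Illustrative health checks after applying the manifests above.
kubectl get tigerastatus                          # calico and apiserver should report AVAILABLE=True
kubectl -n calico-system get pods -o wide         # one calico-node pod per node, all Running
kubectl get installation default \
  -o jsonpath='{.spec.calicoNetwork.ipPools[0].cidr}'; echo   # should print 10.244.0.0/16
```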

View File

@@ -1,64 +0,0 @@
#!/bin/bash
NAMESPACE=${NAMESPACE:-longhorn-system}
remove_and_wait() {
local crd=$1
out=`kubectl -n ${NAMESPACE} delete $crd --all 2>&1`
if [ $? -ne 0 ]; then
echo $out
return
fi
while true; do
out=`kubectl -n ${NAMESPACE} get $crd -o yaml | grep 'items: \[\]'`
if [ $? -eq 0 ]; then
break
fi
sleep 1
done
echo all $crd instances deleted
}
remove_crd_instances() {
remove_and_wait volumes.longhorn.rancher.io
# TODO: remove engines and replicas once we fix https://github.com/rancher/longhorn/issues/273
remove_and_wait engines.longhorn.rancher.io
remove_and_wait replicas.longhorn.rancher.io
remove_and_wait engineimages.longhorn.rancher.io
remove_and_wait settings.longhorn.rancher.io
# do this one last; manager crashes
remove_and_wait nodes.longhorn.rancher.io
}
# Delete driver related workloads in specific order
remove_driver() {
kubectl -n ${NAMESPACE} delete deployment.apps/longhorn-driver-deployer
kubectl -n ${NAMESPACE} delete daemonset.apps/longhorn-csi-plugin
kubectl -n ${NAMESPACE} delete statefulset.apps/csi-attacher
kubectl -n ${NAMESPACE} delete service/csi-attacher
kubectl -n ${NAMESPACE} delete statefulset.apps/csi-provisioner
kubectl -n ${NAMESPACE} delete service/csi-provisioner
kubectl -n ${NAMESPACE} delete daemonset.apps/longhorn-flexvolume-driver
}
# Delete all workloads in the namespace
remove_workloads() {
kubectl -n ${NAMESPACE} get daemonset.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get deployment.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get replicaset.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get statefulset.apps -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get pods -o yaml | kubectl delete -f -
kubectl -n ${NAMESPACE} get service -o yaml | kubectl delete -f -
}
# Delete CRD definitions with longhorn.rancher.io in the name
remove_crds() {
for crd in $(kubectl get crd -o jsonpath={.items[*].metadata.name} | tr ' ' '\n' | grep longhorn.rancher.io); do
kubectl delete crd/$crd
done
}
remove_crd_instances
remove_driver
remove_workloads
remove_crds

View File

@@ -1,5 +1,5 @@
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
@@ -10,7 +10,7 @@ spec:
# Note: The ipPools section cannot be modified post-install.
ipPools:
- blockSize: 26
cidr: 192.168.0.0/16
cidr: 10.244.0.0/16
encapsulation: VXLANCrossSubnet
natOutgoing: Enabled
nodeSelector: all()
@@ -18,7 +18,7 @@ spec:
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.APIServer
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:

File diff suppressed because it is too large

View File

@@ -64,8 +64,16 @@ spec:
number: 9000
tls:
- hosts:
- traefik.k-space.ee
secretName: traefik-tls
- "*.k-space.ee"
secretName: wildcard-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
name: default
spec:
defaultCertificate:
secretName: wildcard-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware

View File

@@ -1,3 +1,34 @@
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: proxmox
spec:
entryPoints:
- https
routes:
- kind: Rule
match: Host(`pve.k-space.ee`)
priority: 10
middlewares:
- name: proxmox-redirect
- name: traefik-sso@kubernetescrd
- name: traefik-proxmox-redirect@kubernetescrd
services:
- kind: Service
name: pve1
passHostHeader: true
port: 8006
responseForwarding:
flushInterval: 1ms
scheme: https
serversTransport: proxmox-servers-transport
tls:
secretName: pve
domains:
- main: pve.k-space.ee
sans:
- "*.k-space.ee"
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
@@ -56,101 +87,6 @@ data:
RWRmRHIzNTBpZkRCQkVuL3RvL3JUczFOVjhyOGpjcG14a2MzNjlSQXp3TmJiRVkKMVE9PQotLS0t
LUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
---
apiVersion: v1
kind: Service
metadata:
name: pve1
annotations:
traefik.ingress.kubernetes.io/service.serverstransport: traefik-proxmox-servers-transport@kubernetescrd
spec:
type: ExternalName
externalName: pve1.proxmox.infra.k-space.ee
ports:
- name: https
port: 8006
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: pve8
annotations:
traefik.ingress.kubernetes.io/service.serverstransport: traefik-proxmox-servers-transport@kubernetescrd
spec:
type: ExternalName
externalName: pve8.proxmox.infra.k-space.ee
ports:
- name: https
port: 8006
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: pve9
annotations:
traefik.ingress.kubernetes.io/service.serverstransport: traefik-proxmox-servers-transport@kubernetescrd
spec:
type: ExternalName
externalName: pve9.proxmox.infra.k-space.ee
ports:
- name: https
port: 8006
protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: pve
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd,traefik-proxmox-redirect@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
rules:
- host: proxmox.k-space.ee
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: whoami
port:
number: 80
- host: pve.k-space.ee
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: pve1
port:
number: 8006
- pathType: Prefix
path: "/"
backend:
service:
name: pve8
port:
number: 8006
- pathType: Prefix
path: "/"
backend:
service:
name: pve9
port:
number: 8006
tls:
- hosts:
- pve.k-space.ee
- proxmox.k-space.ee
secretName: pve-tls
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:

View File

@@ -1,13 +1,36 @@
image:
tag: "2.8"
tag: "2.9"
websecure:
tls:
enabled: true
providers:
kubernetesCRD:
enabled: true
namespaces:
- traefik
- authelia
kubernetesIngress:
allowEmptyServices: true
allowExternalNameServices: true
namespaces:
- argocd
- authelia
- camtiler
- drone
- elastic-system
- etherpad
- freescout
- grafana
- harbor
- kubernetes-dashboard
- logging
- longhorn-system
- phpmyadmin
- prometheus-operator
- wildduck
deployment:
replicas: 2

View File

@@ -17,7 +17,6 @@ metadata:
name: voron
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@@ -36,5 +35,4 @@ spec:
name: http
tls:
- hosts:
- voron.k-space.ee
secretName: voron-tls
- "*.k-space.ee"

View File

@@ -41,7 +41,6 @@ kind: Ingress
metadata:
name: whoami
annotations:
cert-manager.io/cluster-issuer: default
external-dns.alpha.kubernetes.io/target: traefik.k-space.ee
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
@@ -50,8 +49,7 @@ metadata:
spec:
tls:
- hosts:
- "whoami.k-space.ee"
secretName: whoami-tls
- "*.k-space.ee"
rules:
- host: "whoami.k-space.ee"
http:

View File

@@ -104,7 +104,6 @@ metadata:
namespace: wildduck
annotations:
kubernetes.io/ingress.class: traefik
cert-manager.io/cluster-issuer: default
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.middlewares: traefik-sso@kubernetescrd
traefik.ingress.kubernetes.io/router.tls: "true"
@@ -123,8 +122,7 @@ spec:
number: 80
tls:
- hosts:
- webmail.k-space.ee
secretName: webmail-tls
- "*.k-space.ee"
---
apiVersion: codemowers.io/v1alpha1
kind: KeyDBCluster