Compare commits


39 Commits
awx ... main

Author SHA1 Message Date
Ibrahim Mkusa  3e9451276e  separate core cluster functions from regular services  2025-06-20 09:22:12 -04:00
Ibrahim Mkusa  0f5cc78d91  syncthing install playbook  2025-06-20 09:16:08 -04:00
Ibrahim Mkusa  29daa4153e  separate seed and main cluster IP pool + fixes  2025-06-20 08:10:35 -04:00
Ibrahim Mkusa  2cb1b29aa7  use k8s as ingress for external cluster services  2025-06-08 21:29:06 -04:00
Ibrahim Mkusa  a371e3d9cd  harbor 1.12 as container registry alternative to gitea  2025-06-07 21:39:34 -04:00
Ibrahim Mkusa  dc98b8a8ba  mirror personal webservice within the cluster  2025-06-06 17:16:11 -04:00
Ibrahim Mkusa  9f77404f31  adjust request limits;gitea CA -> prod  2025-06-06 15:38:59 -04:00
Ibrahim Mkusa  72675b14d0  middleman be gone;welcome local ingress  2025-06-06 14:27:51 -04:00
Ibrahim Mkusa  e85857a329  Cloudflare tunnel for k8s  2025-06-04 22:12:50 -04:00
Ibrahim Mkusa  59cf977fec  add gitea app 1.23.8  2025-06-04 21:56:20 -04:00
Ibrahim Mkusa  825844cac8  add valkey for clusterwide session/cache  2025-06-03 19:19:24 -04:00
Ibrahim Mkusa  7956524b1e  Deploy cloudnative-pg for postgresql HA deployment  2025-06-03 18:55:33 -04:00
Ibrahim Mkusa  3102ad4a31  Add kube-prometheus-stack starting v0.82.2  2025-06-02 19:57:31 -04:00
Ibrahim Mkusa  cf9eada196  argo-cd upgrade v2.13.2 --> v3.0.5  2025-06-01 14:34:19 -04:00
Ibrahim Mkusa  f37e839267  cert-manager upgrade v1.16.1 --> v1.17.2  2025-05-31 22:24:18 -04:00
Ibrahim Mkusa  39f39732e3  longhorn requirements install on deb family  2025-05-30 19:53:23 -04:00
Ibrahim Mkusa  b2f8a8e885  longhorn upgrade v1.8.1 --> 1.9.0  2025-05-30 19:19:14 -04:00
Ibrahim Mkusa  c33beb42dd  externaldns upgrade v0.15.0 to v0.17.0 + multi-pihole ops  2025-05-30 18:38:55 -04:00
Ibrahim Mkusa  04f9d7fedf  ingress-nginx upgrade 1.12-0-beta --> 1.12.2  2025-05-30 17:44:32 -04:00
Ibrahim Mkusa  35555b4233  reorg + pihole updates  2025-05-30 16:35:39 -04:00
Ibrahim Mkusa  aa0a454665  pihole update 2.27.0 --> 2.31.0  2025-05-27 22:22:34 -04:00
Ibrahim Mkusa  3ec8286476  Upgrade longhorn 1.7.2 --> 1.8.1  2025-04-21 09:32:49 -04:00
Ibrahim Mkusa  dc09613b2e  reorg, consolidate playbooks/k8s and clean up  2025-04-20 14:07:20 -04:00
Ibrahim Mkusa  c94147754f  expand IP pool for metallb to max of 24  2025-04-20 13:21:53 -04:00
Ibrahim Mkusa  83c2b8e335  clean up inventory post r730xd migration  2025-04-20 12:56:19 -04:00
Ibrahim Mkusa  e00e5c34e4  disable host key checking, update inventory  2025-01-05 13:09:23 -05:00
Ibrahim Mkusa  50c0bc1e79  integrate jellyfin with external-dns + nfs updates  2025-01-01 12:31:38 -05:00
Ibrahim Mkusa  be9b3994a7  manifests -> helm charts for argocd  2024-12-31 23:05:31 -05:00
Ibrahim Mkusa  6921862a3c  external-dns annotation for pihole  2024-12-31 22:13:08 -05:00
Ibrahim Mkusa  e5e4382322  external-dns annotation for longhorn  2024-12-31 21:31:28 -05:00
Ibrahim Mkusa  b51c7dff6d  Internal pihole ip refresh  2024-12-31 20:37:09 -05:00
Ibrahim Mkusa  5e7668b59f  upgrade pihole helm chart v0.26 -> v0.27  2024-12-31 19:17:44 -05:00
Ibrahim Mkusa  98dd2bd16f  upgrade metallb v0.14.0 -> v0.14.9  2024-12-31 18:58:30 -05:00
Ibrahim Mkusa  3b41466b79  selinux policy exceptions(for rpm hosts) during install  2024-12-03 13:05:56 -05:00
Ibrahim Mkusa  7a62c14784  github_runner role works on redhat: selinux-fu  2024-12-02 23:49:17 -05:00
Ibrahim Mkusa  2dff4e3921  role to setup machine as a github runner.(working)  2024-12-02 18:33:42 -05:00
Ibrahim Mkusa  234a69f50a  reorg docs for argocd  2024-11-21 12:54:59 -05:00
Ibrahim Mkusa  7e521afe6b  Integrate kube-external-dns with pihole for autonomous dns  2024-11-20 20:54:55 -05:00
Ibrahim Mkusa  8f89896394  Merge pull request #1 from iskm/awx: shift priviledge escalation away from the execution environment  2024-11-18 19:21:59 -05:00
1167 changed files with 215282 additions and 8705 deletions

.gitignore

@ -3,3 +3,5 @@ roles/*
collections/*
bind.conf.j2
dhcpd.conf.j2
!roles/github_runner
awx-operator


@ -1,5 +1,7 @@
[defaults]
inventory=./inventory
inventory=./inventory/inventory.ini
remote_user="ansible"
ansible_user="ansible"
roles_path=./roles
collections_path=./collections
host_key_checking = False


@ -1,42 +0,0 @@
[control]
localhost
[dns]
192.168.2.236
[docker]
docker0 ansible_user=ansible
[aws]
aws ansible_user=ubuntu
[helm]
node1 ansible_user=ansible
node4 ansible_user=ansible
[terraform]
node1
[dhcp]
nodex
[doc]
doc ansible_user=pollen
[cluster]
node1 ansible_user=ansible
node2 ansible_user=ansible
node3 ansible_user=ansible
node4 ansible_user=ansible
node5 ansible_user=ansible
node6 ansible_user=ansible
[windows]
bane ansible_user=ansible ansible_connection=winrm ansible_winrm_transport=basic ansible_port=5985 ansible_winrm_server_cert_validation=ignore
baxter
[servers:children]
doc
aws

inventory/inventory.ini

@ -0,0 +1,12 @@
t440
baxter
pve0 ansible_user=root
rhel0
docker0
doc ansible_user=pollen
node0
node1
node2
node3
node4
node5

manifests/TIPS.md

@ -0,0 +1,29 @@
On a fresh cluster, the order of installation is usually `metallb` -> `pihole`
-> your favorite ingress controller -> external-dns -> longhorn (install the
prerequisites first) -> cert-manager -> other services.

When `pihole` is installed via Helm, it looks for its dashboard password in a
secret named `pihole-dashboard-password` in the same namespace. You can create
it like so:
```
kubectl create secret generic pihole-dashboard-password \
  --from-literal=password=XXXXXXXXX
```
substituting XXXXXXXXX with your actual password. If it matters to you, make
sure encryption at rest is enabled for your secrets.

Make sure to update your main LAN DNS servers to point to pihole. Two
instances are highly encouraged for redundancy.

When external-dns is installed via manifests, it looks for the Pi-hole
password in a secret named `pihole-externaldns-password` in the same
namespace. You can create it like so:
```
kubectl create secret generic pihole-externaldns-password \
  --from-literal=EXTERNAL_DNS_PIHOLE_PASSWORD=XXXXXXXXX
```
Where possible, create these secret resources during setup rather than
hardcoding passwords; otherwise there is a good chance of a password being
committed unawares.
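As a minimal sketch of how a record then gets published: any Service (or
Ingress) carrying the `external-dns.alpha.kubernetes.io/hostname` annotation
will have an A record created for it in Pi-hole. The service name, selector,
and hostname below are placeholders.
```
apiVersion: v1
kind: Service
metadata:
  name: whoami                                      # placeholder name
  annotations:
    # external-dns reads this and creates the A record in Pi-hole
    external-dns.alpha.kubernetes.io/hostname: whoami.homelab.local
spec:
  type: LoadBalancer                                # address comes from the metallb pool
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
```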


@ -1,11 +0,0 @@
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
name: awx-maabara
spec:
service_type: LoadBalancer
postgres_data_volume_init: true
postgres_init_container_commands: |
chown 26:0 /var/lib/pgsql/data
chmod 700 /var/lib/pgsql/data


@ -1,14 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
# Find the latest tag here: https://github.com/ansible/awx-operator/releases
- github.com/ansible/awx-operator/config/default?ref=2.19.1
- awx-maabara.yaml
# Set the image tags to match the git version from above
images:
- name: quay.io/ansible/awx-operator
newTag: 2.19.1
# Specify a custom namespace in which to install AWX
namespace: awx


@ -34,7 +34,7 @@ metadata:
app.kubernetes.io/name: 'cert-manager'
app.kubernetes.io/instance: 'cert-manager'
# Generated labels
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
group: cert-manager.io
names:
@ -355,7 +355,7 @@ metadata:
app.kubernetes.io/name: 'cert-manager'
app.kubernetes.io/instance: 'cert-manager'
# Generated labels
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
group: cert-manager.io
names:
@ -537,7 +537,6 @@ spec:
type: object
required:
- create
- passwordSecretRef
properties:
alias:
description: |-
@ -549,17 +548,25 @@ spec:
Create enables JKS keystore creation for the Certificate.
If true, a file named `keystore.jks` will be created in the target
Secret resource, encrypted using the password stored in
`passwordSecretRef`.
`passwordSecretRef` or `password`.
The keystore file will be updated immediately.
If the issuer provided a CA certificate, a file named `truststore.jks`
will also be created in the target Secret resource, encrypted using the
password stored in `passwordSecretRef`
containing the issuing Certificate Authority
type: boolean
password:
description: |-
Password provides a literal password used to encrypt the JKS keystore.
Mutually exclusive with passwordSecretRef.
One of password or passwordSecretRef must provide a password with a non-zero length.
type: string
passwordSecretRef:
description: |-
PasswordSecretRef is a reference to a key in a Secret resource
PasswordSecretRef is a reference to a non-empty key in a Secret resource
containing the password used to encrypt the JKS keystore.
Mutually exclusive with password.
One of password or passwordSecretRef must provide a password with a non-zero length.
type: object
required:
- name
@ -582,24 +589,31 @@ spec:
type: object
required:
- create
- passwordSecretRef
properties:
create:
description: |-
Create enables PKCS12 keystore creation for the Certificate.
If true, a file named `keystore.p12` will be created in the target
Secret resource, encrypted using the password stored in
`passwordSecretRef`.
`passwordSecretRef` or in `password`.
The keystore file will be updated immediately.
If the issuer provided a CA certificate, a file named `truststore.p12` will
also be created in the target Secret resource, encrypted using the
password stored in `passwordSecretRef` containing the issuing Certificate
Authority
type: boolean
password:
description: |-
Password provides a literal password used to encrypt the PKCS#12 keystore.
Mutually exclusive with passwordSecretRef.
One of password or passwordSecretRef must provide a password with a non-zero length.
type: string
passwordSecretRef:
description: |-
PasswordSecretRef is a reference to a key in a Secret resource
containing the password used to encrypt the PKCS12 keystore.
PasswordSecretRef is a reference to a non-empty key in a Secret resource
containing the password used to encrypt the PKCS#12 keystore.
Mutually exclusive with password.
One of password or passwordSecretRef must provide a password with a non-zero length.
type: object
required:
- name
@ -1124,7 +1138,7 @@ metadata:
app.kubernetes.io/name: 'cert-manager'
app.kubernetes.io/instance: 'cert-manager'
# Generated labels
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
group: acme.cert-manager.io
names:
@ -1400,6 +1414,9 @@ spec:
resource ID of the managed identity, can not be used at the same time as clientID
Cannot be used for Azure Managed Service Identity
type: string
tenantID:
description: tenant ID of the managed identity, can not be used at the same time as resourceID
type: string
resourceGroupName:
description: resource group the DNS zone is located in
type: string
@ -4331,7 +4348,7 @@ metadata:
app.kubernetes.io/name: 'cert-manager'
app.kubernetes.io/instance: 'cert-manager'
# Generated labels
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
group: cert-manager.io
names:
@ -4714,6 +4731,9 @@ spec:
resource ID of the managed identity, can not be used at the same time as clientID
Cannot be used for Azure Managed Service Identity
type: string
tenantID:
description: tenant ID of the managed identity, can not be used at the same time as resourceID
type: string
resourceGroupName:
description: resource group the DNS zone is located in
type: string
@ -8059,7 +8079,7 @@ metadata:
app.kubernetes.io/instance: 'cert-manager'
app.kubernetes.io/component: "crds"
# Generated labels
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
group: cert-manager.io
names:
@ -8441,6 +8461,9 @@ spec:
resource ID of the managed identity, can not be used at the same time as clientID
Cannot be used for Azure Managed Service Identity
type: string
tenantID:
description: tenant ID of the managed identity, can not be used at the same time as resourceID
type: string
resourceGroupName:
description: resource group the DNS zone is located in
type: string
@ -11786,7 +11809,7 @@ metadata:
app.kubernetes.io/instance: 'cert-manager'
app.kubernetes.io/component: "crds"
# Generated labels
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
group: acme.cert-manager.io
names:
@ -12052,7 +12075,7 @@ metadata:
app.kubernetes.io/name: cainjector
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cainjector"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
---
# Source: cert-manager/templates/serviceaccount.yaml
apiVersion: v1
@ -12066,7 +12089,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
---
# Source: cert-manager/templates/webhook-serviceaccount.yaml
apiVersion: v1
@ -12080,7 +12103,7 @@ metadata:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
---
# Source: cert-manager/templates/cainjector-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
@ -12092,7 +12115,7 @@ metadata:
app.kubernetes.io/name: cainjector
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cainjector"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["cert-manager.io"]
resources: ["certificates"]
@ -12124,7 +12147,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["cert-manager.io"]
resources: ["issuers", "issuers/status"]
@ -12150,7 +12173,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["cert-manager.io"]
resources: ["clusterissuers", "clusterissuers/status"]
@ -12176,7 +12199,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["cert-manager.io"]
resources: ["certificates", "certificates/status", "certificaterequests", "certificaterequests/status"]
@ -12211,7 +12234,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["acme.cert-manager.io"]
resources: ["orders", "orders/status"]
@ -12249,7 +12272,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
# Use to update challenge resource status
- apiGroups: ["acme.cert-manager.io"]
@ -12309,7 +12332,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["cert-manager.io"]
resources: ["certificates", "certificaterequests"]
@ -12346,7 +12369,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true"
rules:
- apiGroups: ["cert-manager.io"]
@ -12363,7 +12386,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
@ -12386,7 +12409,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
@ -12411,7 +12434,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cert-manager"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["cert-manager.io"]
resources: ["signers"]
@ -12433,7 +12456,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cert-manager"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["certificates.k8s.io"]
resources: ["certificatesigningrequests"]
@ -12459,7 +12482,7 @@ metadata:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["authorization.k8s.io"]
resources: ["subjectaccessreviews"]
@ -12475,7 +12498,7 @@ metadata:
app.kubernetes.io/name: cainjector
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cainjector"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12495,7 +12518,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12515,7 +12538,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12535,7 +12558,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12555,7 +12578,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12575,7 +12598,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12595,7 +12618,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12615,7 +12638,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cert-manager"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12635,7 +12658,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cert-manager"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12655,7 +12678,7 @@ metadata:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -12677,7 +12700,7 @@ metadata:
app.kubernetes.io/name: cainjector
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cainjector"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
# Used for leader election by the controller
# cert-manager-cainjector-leader-election is used by the CertificateBased injector controller
@ -12703,7 +12726,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
@ -12724,7 +12747,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: [""]
resources: ["serviceaccounts/token"]
@ -12742,7 +12765,7 @@ metadata:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
rules:
- apiGroups: [""]
resources: ["secrets"]
@ -12767,7 +12790,7 @@ metadata:
app.kubernetes.io/name: cainjector
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cainjector"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
@ -12790,7 +12813,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
@ -12812,7 +12835,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
@ -12833,7 +12856,7 @@ metadata:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
@ -12854,7 +12877,7 @@ metadata:
app.kubernetes.io/name: cainjector
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cainjector"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
type: ClusterIP
ports:
@ -12877,7 +12900,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
type: ClusterIP
ports:
@ -12901,7 +12924,7 @@ metadata:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
type: ClusterIP
ports:
@ -12929,7 +12952,7 @@ metadata:
app.kubernetes.io/name: cainjector
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cainjector"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
replicas: 1
selector:
@ -12944,7 +12967,7 @@ spec:
app.kubernetes.io/name: cainjector
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "cainjector"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
annotations:
prometheus.io/path: "/metrics"
prometheus.io/scrape: 'true'
@ -12958,7 +12981,7 @@ spec:
type: RuntimeDefault
containers:
- name: cert-manager-cainjector
image: "quay.io/jetstack/cert-manager-cainjector:v1.16.1"
image: "quay.io/jetstack/cert-manager-cainjector:v1.17.2"
imagePullPolicy: IfNotPresent
args:
- --v=2
@ -12992,7 +13015,7 @@ metadata:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
replicas: 1
selector:
@ -13007,7 +13030,7 @@ spec:
app.kubernetes.io/name: cert-manager
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "controller"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
annotations:
prometheus.io/path: "/metrics"
prometheus.io/scrape: 'true'
@ -13021,13 +13044,13 @@ spec:
type: RuntimeDefault
containers:
- name: cert-manager-controller
image: "quay.io/jetstack/cert-manager-controller:v1.16.1"
image: "quay.io/jetstack/cert-manager-controller:v1.17.2"
imagePullPolicy: IfNotPresent
args:
- --v=2
- --cluster-resource-namespace=$(POD_NAMESPACE)
- --leader-election-namespace=kube-system
- --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.16.1
- --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver:v1.17.2
- --max-concurrent-challenges=60
ports:
- containerPort: 9402
@ -13074,7 +13097,7 @@ metadata:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
spec:
replicas: 1
selector:
@ -13089,7 +13112,7 @@ spec:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
annotations:
prometheus.io/path: "/metrics"
prometheus.io/scrape: 'true'
@ -13103,7 +13126,7 @@ spec:
type: RuntimeDefault
containers:
- name: cert-manager-webhook
image: "quay.io/jetstack/cert-manager-webhook:v1.16.1"
image: "quay.io/jetstack/cert-manager-webhook:v1.17.2"
imagePullPolicy: IfNotPresent
args:
- --v=2
@ -13187,7 +13210,7 @@ metadata:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
annotations:
cert-manager.io/inject-ca-from-secret: "cert-manager/cert-manager-webhook-ca"
webhooks:
@ -13226,7 +13249,7 @@ metadata:
app.kubernetes.io/name: webhook
app.kubernetes.io/instance: cert-manager
app.kubernetes.io/component: "webhook"
app.kubernetes.io/version: "v1.16.1"
app.kubernetes.io/version: "v1.17.2"
annotations:
cert-manager.io/inject-ca-from-secret: "cert-manager/cert-manager-webhook-ca"
webhooks:


@ -0,0 +1,18 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: YOUREMAILHERE@hello.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-prod
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
ingressClassName: nginx
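A minimal sketch of how an Ingress would consume this issuer, using
cert-manager's standard `cert-manager.io/cluster-issuer` annotation; the host,
secret, and backend Service names below are placeholders.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                                   # placeholder
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com                         # must be publicly reachable for HTTP-01
      secretName: app-example-com-tls             # cert-manager stores the certificate here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example                     # placeholder backend Service
                port:
                  number: 80
```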


@ -0,0 +1,18 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: YOUREMAILHERE@hello.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
ingressClassName: nginx


@ -0,0 +1,121 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups: [""]
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-0
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns-0
image: registry.k8s.io/external-dns/external-dns:v0.17.0
# If authentication is disabled and/or you didn't create
# a secret, you can remove this block.
envFrom:
- secretRef:
# Change this if you gave the secret a different name
name: pihole-externaldns-password
args:
- --source=service
- --source=ingress # other sources exist, e.g. 'traefik-proxy'; check the documentation
# Pihole only supports A/AAAA/CNAME records so there is no mechanism to track ownership.
# You don't need to set this flag, but if you leave it unset, you will receive warning
# logs when ExternalDNS attempts to create TXT records.
- --registry=noop
# IMPORTANT: If you have records that you manage manually in Pi-hole, set
# the policy to upsert-only so they do not get deleted.
- --policy=upsert-only
- --provider=pihole # lots of other providers
#- --pihole-tls-skip-verify
- --pihole-api-version=6
# Change this to the actual address of your Pi-hole web server
#- --pihole-server=http://pihole-web.default.svc.cluster.local
- --pihole-server=http://192.168.0.239
securityContext:
fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes token files
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns-1
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns-1
image: registry.k8s.io/external-dns/external-dns:v0.17.0
# If authentication is disabled and/or you didn't create
# a secret, you can remove this block.
envFrom:
- secretRef:
# Change this if you gave the secret a different name
name: pihole-externaldns-password
args:
- --source=service
- --source=ingress # other sources exist, e.g. 'traefik-proxy'; check the documentation
# Pihole only supports A/AAAA/CNAME records so there is no mechanism to track ownership.
# You don't need to set this flag, but if you leave it unset, you will receive warning
# logs when ExternalDNS attempts to create TXT records.
- --registry=noop
# IMPORTANT: If you have records that you manage manually in Pi-hole, set
# the policy to upsert-only so they do not get deleted.
- --policy=upsert-only
- --provider=pihole # lots of other providers
#- --pihole-tls-skip-verify
- --pihole-api-version=6
# Change this to the actual address of your Pi-hole web server
#- --pihole-server=http://pihole-web.default.svc.cluster.local
- --pihole-server=http://192.168.0.203
securityContext:
fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes token files
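A minimal sanity check after applying the manifest, using only the deployment
names defined above:
```console
# confirm both replicas rolled out and inspect their recent logs
kubectl rollout status deployment/external-dns-0
kubectl rollout status deployment/external-dns-1
kubectl logs deployment/external-dns-0 --tail=20
kubectl logs deployment/external-dns-1 --tail=20
```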


@ -15,7 +15,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx
namespace: ingress-nginx
---
@ -28,7 +28,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission
namespace: ingress-nginx
---
@ -40,7 +40,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx
namespace: ingress-nginx
rules:
@ -130,7 +130,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission
namespace: ingress-nginx
rules:
@ -149,7 +149,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx
rules:
- apiGroups:
@ -231,7 +231,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission
rules:
- apiGroups:
@ -250,7 +250,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx
namespace: ingress-nginx
roleRef:
@ -270,7 +270,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission
namespace: ingress-nginx
roleRef:
@ -289,7 +289,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
@ -308,7 +308,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
@ -328,7 +328,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-controller
namespace: ingress-nginx
---
@ -340,9 +340,11 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-controller
namespace: ingress-nginx
annotations:
external-dns.alpha.kubernetes.io/hostname: www.homelab.local
spec:
externalTrafficPolicy: Local
ipFamilies:
@ -364,6 +366,7 @@ spec:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: LoadBalancer
loadBalancerIP: 192.168.0.231
---
apiVersion: v1
kind: Service
@ -373,7 +376,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
@ -396,7 +399,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
@ -418,7 +421,7 @@ spec:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
spec:
containers:
- args:
@ -442,7 +445,7 @@ spec:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
image: registry.k8s.io/ingress-nginx/controller:v1.12.0-beta.0@sha256:9724476b928967173d501040631b23ba07f47073999e80e34b120e8db5f234d5
image: registry.k8s.io/ingress-nginx/controller:v1.12.2@sha256:03497ee984628e95eca9b2279e3f3a3c1685dd48635479e627d219f00c8eefa9
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
@ -519,7 +522,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission-create
namespace: ingress-nginx
spec:
@ -530,7 +533,7 @@ spec:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission-create
spec:
containers:
@ -544,7 +547,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.3@sha256:2cf4ebfa82a37c357455458f6dfc334aea1392d508270b2517795a9933a02524
imagePullPolicy: IfNotPresent
name: create
securityContext:
@ -571,7 +574,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission-patch
namespace: ingress-nginx
spec:
@ -582,7 +585,7 @@ spec:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission-patch
spec:
containers:
@ -598,7 +601,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.3@sha256:2cf4ebfa82a37c357455458f6dfc334aea1392d508270b2517795a9933a02524
imagePullPolicy: IfNotPresent
name: patch
securityContext:
@ -625,7 +628,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: nginx
spec:
controller: k8s.io/ingress-nginx
@ -638,7 +641,7 @@ metadata:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.12.0-beta.0
app.kubernetes.io/version: 1.12.2
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:


@ -0,0 +1,33 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
# helm/charts
OWNERS
hack/
ci/
kube-prometheus-*.tgz
unittests/
files/dashboards/
UPGRADE.md
CONTRIBUTING.md
.editorconfig


@ -0,0 +1,18 @@
dependencies:
- name: crds
repository: ""
version: 0.0.0
- name: kube-state-metrics
repository: https://prometheus-community.github.io/helm-charts
version: 5.33.2
- name: prometheus-node-exporter
repository: https://prometheus-community.github.io/helm-charts
version: 4.46.1
- name: grafana
repository: https://grafana.github.io/helm-charts
version: 9.2.1
- name: prometheus-windows-exporter
repository: https://prometheus-community.github.io/helm-charts
version: 0.10.2
digest: sha256:3e5fce8bd854987be2beaa39cc1460bd3efd70e73fc1cc04b8fe568cdf4bc8f8
generated: "2025-06-02T13:47:53.376525267Z"


@ -0,0 +1,75 @@
annotations:
artifacthub.io/license: Apache-2.0
artifacthub.io/links: |
- name: Chart Source
url: https://github.com/prometheus-community/helm-charts
- name: Upstream Project
url: https://github.com/prometheus-operator/kube-prometheus
- name: Upgrade Process
url: https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md#upgrading-chart
artifacthub.io/operator: "true"
apiVersion: v2
appVersion: v0.82.2
dependencies:
- condition: crds.enabled
name: crds
repository: ""
version: 0.0.0
- condition: kubeStateMetrics.enabled
name: kube-state-metrics
repository: https://prometheus-community.github.io/helm-charts
version: 5.33.*
- condition: nodeExporter.enabled
name: prometheus-node-exporter
repository: https://prometheus-community.github.io/helm-charts
version: 4.46.1
- condition: grafana.enabled
name: grafana
repository: https://grafana.github.io/helm-charts
version: 9.2.1
- condition: windowsMonitoring.enabled
name: prometheus-windows-exporter
repository: https://prometheus-community.github.io/helm-charts
version: 0.10.*
description: kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards,
and Prometheus rules combined with documentation and scripts to provide easy to
operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus
Operator.
home: https://github.com/prometheus-operator/kube-prometheus
icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
keywords:
- operator
- prometheus
- kube-prometheus
kubeVersion: '>=1.19.0-0'
maintainers:
- email: andrew@quadcorps.co.uk
name: andrewgkew
url: https://github.com/andrewgkew
- email: gianrubio@gmail.com
name: gianrubio
url: https://github.com/gianrubio
- email: github.gkarthiks@gmail.com
name: gkarthiks
url: https://github.com/gkarthiks
- email: kube-prometheus-stack@sisti.pt
name: GMartinez-Sisti
url: https://github.com/GMartinez-Sisti
- email: github@jkroepke.de
name: jkroepke
url: https://github.com/jkroepke
- email: scott@r6by.com
name: scottrigby
url: https://github.com/scottrigby
- email: miroslav.hadzhiev@gmail.com
name: Xtigyro
url: https://github.com/Xtigyro
- email: quentin.bisson@gmail.com
name: QuentinBisson
url: https://github.com/QuentinBisson
name: kube-prometheus-stack
sources:
- https://github.com/prometheus-community/helm-charts
- https://github.com/prometheus-operator/kube-prometheus
type: application
version: 72.9.1


@ -0,0 +1,356 @@
# kube-prometheus-stack
Installs core components of the [kube-prometheus stack](https://github.com/prometheus-operator/kube-prometheus), a collection of Kubernetes manifests, [Grafana](http://grafana.com/) dashboards, and [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with [Prometheus](https://prometheus.io/) using the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator).
See the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) readme for details about components, dashboards, and alerts.
_Note: This chart was formerly named `prometheus-operator` chart, now renamed to more clearly reflect that it installs the `kube-prometheus` project stack, within which Prometheus Operator is only one component. This chart does not install all components of `kube-prometheus`, notably excluding the Prometheus Adapter and Prometheus black-box exporter._
## Prerequisites
- Kubernetes 1.19+
- Helm 3+
## Get Helm Repository Info
```console
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```
_See [`helm repo`](https://helm.sh/docs/helm/helm_repo/) for command documentation._
## Install Helm Chart
```console
helm install [RELEASE_NAME] prometheus-community/kube-prometheus-stack
```
_See [configuration](#configuration) below._
_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._
## Dependencies
By default this chart installs additional, dependent charts:
- [prometheus-community/kube-state-metrics](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics)
- [prometheus-community/prometheus-node-exporter](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter)
- [grafana/grafana](https://github.com/grafana/helm-charts/tree/main/charts/grafana)
To disable dependencies during installation, see [multiple releases](#multiple-releases) below.
_See [helm dependency](https://helm.sh/docs/helm/helm_dependency/) for command documentation._
### Grafana Dashboards
This chart provisions a collection of curated Grafana dashboards that are automatically loaded into Grafana via ConfigMaps. These dashboards are rendered into the Helm chart under [`templates/grafana/`](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/templates/grafana/), but **this is not their source of truth**.
The dashboards originate from various upstream projects and are gathered and processed using scripts in the [`hack/`](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack/hack) directory. For details on how these dashboards are sourced and kept up to date, refer to the [hack/README.md](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/hack/README.md).
> **Note:** The dashboards referenced in the `hack` scripts are usually **not the original source** either. Most originate from separate **Prometheus mixin repositories** (e.g., [kubernetes-mixin](https://github.com/kubernetes-monitoring/kubernetes-mixin)) and are processed through `jsonnet` tooling before being included here. To find the original source in case you want to modify it you may have to search even further upstream.
If you wish to contribute or modify dashboards, please follow the guidance in the `hack/README.md` to ensure consistency and reproducibility.
## Uninstall Helm Chart
```console
helm uninstall [RELEASE_NAME]
```
This removes all the Kubernetes components associated with the chart and deletes the release.
_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._
CRDs created by this chart are not removed by default and should be manually cleaned up:
```console
kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheusagents.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd scrapeconfigs.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com
```
## Upgrading Chart
```console
helm upgrade [RELEASE_NAME] prometheus-community/kube-prometheus-stack
```
With Helm v3, CRDs created by this chart are not updated by default and should be manually updated.
Consult also the [Helm Documentation on CRDs](https://helm.sh/docs/chart_best_practices/custom_resource_definitions).
CRDs update lead to a major version bump.
The Chart's [appVersion](https://github.com/prometheus-community/helm-charts/blob/13ed7098db2f78c2bbcdab6c1c3c7a95b4b94574/charts/kube-prometheus-stack/Chart.yaml#L36) refers to the [`prometheus-operator`](https://github.com/prometheus-operator/prometheus-operator/tree/main)'s version with matching CRDs.
_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._
### Upgrading an existing Release to a new major version
A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an incompatible breaking change needing manual actions.
See [UPGRADE.md](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/UPGRADE.md)
for breaking changes between versions.
## Configuration
See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments:
```console
helm show values prometheus-community/kube-prometheus-stack
```
You may also `helm show values` on this chart's [dependencies](#dependencies) for additional options.
### Multiple releases
The same chart can be used to run multiple Prometheus instances in the same cluster if required. To achieve this, it is necessary to run only one instance of prometheus-operator and a pair of alertmanager pods for an HA configuration, while all other components need to be disabled. To disable a dependency during installation, set `kubeStateMetrics.enabled`, `nodeExporter.enabled` and `grafana.enabled` to `false`.
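For illustration, assuming a placeholder release name, a second instance with the shared components disabled could be installed like this:
```console
helm install [RELEASE_NAME_2] prometheus-community/kube-prometheus-stack \
  --set kubeStateMetrics.enabled=false \
  --set nodeExporter.enabled=false \
  --set grafana.enabled=false
```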
## Work-Arounds for Known Issues
### Running on private GKE clusters
When Google configure the control plane for private clusters, they automatically configure VPC peering between your Kubernetes clusters network and a separate Google managed project. In order to restrict what Google are able to access within your cluster, the firewall rules configured restrict access to your Kubernetes pods. This means that in order to use the webhook component with a GKE private cluster, you must configure an additional firewall rule to allow the GKE control plane access to your webhook pod.
You can read more information on how to add firewall rules for the GKE control plane nodes in the [GKE docs](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules)
Alternatively, you can disable the hooks by setting `prometheusOperator.admissionWebhooks.enabled=false`.
## PrometheusRules Admission Webhooks
With Prometheus Operator version 0.30+, the core Prometheus Operator pod exposes an endpoint that will integrate with the `validatingwebhookconfiguration` Kubernetes feature to prevent malformed rules from being added to the cluster.
### How the Chart Configures the Hooks
A validating and mutating webhook configuration requires the endpoint to which the request is sent to use TLS. It is possible to set up custom certificates to do this, but in most cases, a self-signed certificate is enough. The setup of this component requires some more complex orchestration when using helm. The steps are created to be idempotent and to allow turning the feature on and off without running into helm quirks.
1. A pre-install hook provisions a certificate into the same namespace using a format compatible with provisioning using end user certificates. If the certificate already exists, the hook exits.
2. The prometheus operator pod is configured to use a TLS proxy container, which will load that certificate.
3. Validating and Mutating webhook configurations are created in the cluster, with their failure mode set to Ignore. This allows rules to be created by the same chart at the same time, even though the webhook has not yet been fully set up - it does not have the correct CA field set.
4. A post-install hook reads the CA from the secret created by step 1 and patches the Validating and Mutating webhook configurations. This process will allow a custom CA provisioned by some other process to also be patched into the webhook configurations. The chosen failure policy is also patched into the webhook configurations
### Alternatives
It should be possible to use [jetstack/cert-manager](https://github.com/jetstack/cert-manager) if a more complete solution is required, but it has not been tested.
You can enable automatic self-signed TLS certificate provisioning via cert-manager by setting the `prometheusOperator.admissionWebhooks.certManager.enabled` value to true.
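A minimal sketch of flipping that value on an existing release (release name is a placeholder):
```console
helm upgrade [RELEASE_NAME] prometheus-community/kube-prometheus-stack \
  --reuse-values \
  --set prometheusOperator.admissionWebhooks.certManager.enabled=true
```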
### Limitations
Because the operator can only run as a single pod, there is potential for this component failure to cause rule deployment failure. Because this risk is outweighed by the benefit of having validation, the feature is enabled by default.
## Developing Prometheus Rules and Grafana Dashboards
This chart Grafana Dashboards and Prometheus Rules are just a copy from [prometheus-operator/prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) and other sources, synced (with alterations) by scripts in [hack](hack) folder. In order to introduce any changes you need to first [add them to the original repository](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/developing-prometheus-rules-and-grafana-dashboards.md) and then sync there by scripts.
## Further Information
For more in-depth documentation of configuration options meanings, please see
- [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)
- [Prometheus](https://prometheus.io/docs/introduction/overview/)
- [Grafana](https://github.com/grafana/helm-charts/tree/main/charts/grafana#grafana-helm-chart)
## prometheus.io/scrape
The prometheus operator does not support annotation-based discovery of services, using the `PodMonitor` or `ServiceMonitor` CRD in its place as they provide far more configuration options.
For information on how to use PodMonitors/ServiceMonitors, please see the documentation on the `prometheus-operator/prometheus-operator` documentation here:
- [ServiceMonitors](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/developer/getting-started.md#using-servicemonitors)
- [PodMonitors](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/developer/getting-started.md#using-podmonitors)
- [Running Exporters](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/running-exporters.md)
By default, Prometheus discovers PodMonitors and ServiceMonitors within its namespace, that are labeled with the same release tag as the prometheus-operator release.
Sometimes, you may need to discover custom PodMonitors/ServiceMonitors, for example used to scrape data from third-party applications.
An easy way of doing this, without compromising the default PodMonitors/ServiceMonitors discovery, is allowing Prometheus to discover all PodMonitors/ServiceMonitors within its namespace, without applying label filtering.
To do so, you can set `prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues` and `prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues` to `false`.
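A minimal sketch of the corresponding values override (supplied with `-f` or `--set`):
```yaml
prometheus:
  prometheusSpec:
    # discover every PodMonitor/ServiceMonitor in scope, not only those
    # labeled with this release's name
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
```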
## Migrating from stable/prometheus-operator chart
## Zero downtime
Since `kube-prometheus-stack` is fully compatible with the `stable/prometheus-operator` chart, a migration without downtime can be achieved.
However, the old name prefix needs to be kept. If you want the new name please follow the step by step guide below (with downtime).
You can override the name to achieve this:
```console
helm upgrade prometheus-operator prometheus-community/kube-prometheus-stack -n monitoring --reuse-values --set nameOverride=prometheus-operator
```
**Note**: It is recommended to run this first with `--dry-run --debug`.
## Redeploy with new name (downtime)
If the **prometheus-operator** values are compatible with the new **kube-prometheus-stack** chart, please follow the below steps for migration:
> The guide presumes that chart is deployed in `monitoring` namespace and the deployments are running there. If in other namespace, please replace the `monitoring` to the deployed namespace.
1. Patch the PersistenceVolume created/used by the prometheus-operator chart to `Retain` claim policy:
```console
kubectl patch pv/<PersistentVolume name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```
**Note:** To execute the above command, the user must have a cluster wide permission. Please refer [Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
2. Uninstall the **prometheus-operator** release and delete the existing PersistentVolumeClaim, and verify PV become Released.
```console
helm uninstall prometheus-operator -n monitoring
kubectl delete pvc/<PersistenceVolumeClaim name> -n monitoring
```
Additionally, you have to manually remove the remaining `prometheus-operator-kubelet` service.
```console
kubectl delete service/prometheus-operator-kubelet -n kube-system
```
You can choose to remove all your existing CRDs (ServiceMonitors, Podmonitors, etc.) if you want to.
3. Remove current `spec.claimRef` values to change the PV's status from Released to Available.
```console
kubectl patch pv/<PersistentVolume name> --type json -p='[{"op": "remove", "path": "/spec/claimRef"}]' -n monitoring
```
**Note:** To execute the above command, the user must have a cluster wide permission. Please refer to [Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
After these steps, proceed to a fresh **kube-prometheus-stack** installation and make sure the current release of **kube-prometheus-stack** matching the `volumeClaimTemplate` values in the `values.yaml`.
The binding is done via matching a specific amount of storage requested and with certain access modes.
For example, if you had storage specified as this with **prometheus-operator**:
```yaml
volumeClaimTemplate:
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 50Gi
```
You have to specify a matching `volumeClaimTemplate` with 50Gi storage and the `ReadWriteOnce` access mode.
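In the new chart this would translate to something like the following under `prometheus.prometheusSpec` (a sketch based on the example above; adjust the storage class to your environment):
```yaml
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
```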
Additionally, you should check the current AZ of your legacy installation's PV and configure the fresh release to use the same AZ. If the pods are in a different AZ than the PV, the release will fail to bind the existing PV and will create a new one instead.
This can be achieved either by specifying the labels through `values.yaml`, e.g. setting `prometheus.prometheusSpec.nodeSelector` to:
```yaml
nodeSelector:
  failure-domain.beta.kubernetes.io/zone: east-west-1a
```
or passing these values as `--set` overrides during installation.
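As a `--set` override this could look roughly like the following (assuming a release named `kube-prometheus-stack` in the `monitoring` namespace; note the escaped dots in the label key):
```console
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring \
  --set 'prometheus.prometheusSpec.nodeSelector.failure-domain\.beta\.kubernetes\.io/zone=east-west-1a'
```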
The new release should now re-attach your previously released PV with its content.
## Migrating from coreos/prometheus-operator chart
The multiple charts have been combined into a single chart that installs Prometheus Operator, Prometheus, Alertmanager, and Grafana, as well as the multitude of exporters necessary to monitor a cluster.
There is no simple and direct migration path between the charts as the changes are extensive and intended to make the chart easier to support.
The capabilities of the old chart are all available in the new chart, including the ability to run multiple prometheus instances on a single cluster - you will need to disable the parts of the chart you do not wish to deploy.
You can check out the tickets for this change [here](https://github.com/prometheus-operator/prometheus-operator/issues/592) and [here](https://github.com/helm/charts/pull/6765).
### High-level overview of Changes
#### Added dependencies
The chart has added 3 [dependencies](#dependencies).
- Node-Exporter, Kube-State-Metrics: These components are loaded as dependencies into the chart, and are relatively simple components
- Grafana: The Grafana chart is more feature-rich than this chart - it contains a sidecar that is able to load data sources and dashboards from configmaps deployed into the same cluster. For more information check out the [documentation for the chart](https://github.com/grafana/helm-charts/blob/main/charts/grafana/README.md)
#### Kubelet Service
Because the kubelet service has a new name in the chart, make sure to clean up the old kubelet service in the `kube-system` namespace to prevent counting container metrics twice.
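For example, something along these lines (the old service name is a placeholder; check what the previous chart created in your cluster):
```console
kubectl -n kube-system get services
kubectl -n kube-system delete service <old kubelet service name>
```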
#### Persistent Volumes
If you would like to keep the data of the current persistent volumes, it should be possible to attach existing volumes to new PVCs and PVs that are created using the conventions in the new chart. For example, in order to use an existing Azure disk for a helm release called `prometheus-migration` the following resources can be created:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-prometheus-migration-prometheus-0
spec:
  accessModes:
    - ReadWriteOnce
  azureDisk:
    cachingMode: None
    diskName: pvc-prometheus-migration-prometheus-0
    diskURI: /subscriptions/f5125d82-2622-4c50-8d25-3f7ba3e9ac4b/resourceGroups/sample-migration-resource-group/providers/Microsoft.Compute/disks/pvc-prometheus-migration-prometheus-0
    fsType: ""
    kind: Managed
    readOnly: false
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: prometheus
  volumeMode: Filesystem
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: prometheus
    prometheus: prometheus-migration-prometheus
  name: prometheus-prometheus-migration-prometheus-db-prometheus-prometheus-migration-prometheus-0
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: prometheus
  volumeMode: Filesystem
  volumeName: pvc-prometheus-migration-prometheus-0
```
The PVC will take ownership of the PV, and when you create a release using a persistent volume claim template it will use the existing PVCs as they match the naming convention used by the chart. Similar approaches can be used for other cloud providers.
#### KubeProxy
The metrics bind address of kube-proxy defaults to `127.0.0.1:10249`, which Prometheus instances **cannot** access. If you want to collect these metrics, expose them by changing the `metricsBindAddress` field value to `0.0.0.0:10249`.
Depending on the cluster, the relevant `config.conf` section will be in the ConfigMap `kube-system/kube-proxy` or `kube-system/kube-proxy-config`. For example:
```console
kubectl -n kube-system edit cm kube-proxy
```
```yaml
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    # ...
    # metricsBindAddress: 127.0.0.1:10249
    metricsBindAddress: 0.0.0.0:10249
    # ...
  kubeconfig.conf: |-
    # ...
kind: ConfigMap
metadata:
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
```

View File

@ -0,0 +1,3 @@
apiVersion: v2
name: crds
version: 0.0.0

View File

@ -0,0 +1,3 @@
# crds subchart
See: [https://github.com/prometheus-community/helm-charts/issues/3548](https://github.com/prometheus-community/helm-charts/issues/3548)

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,160 @@
# https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.82.2/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.17.2
operator.prometheus.io/version: 0.82.2
name: prometheusrules.monitoring.coreos.com
spec:
group: monitoring.coreos.com
names:
categories:
- prometheus-operator
kind: PrometheusRule
listKind: PrometheusRuleList
plural: prometheusrules
shortNames:
- promrule
singular: prometheusrule
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
description: |-
The `PrometheusRule` custom resource definition (CRD) defines [alerting](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) and [recording](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) rules to be evaluated by `Prometheus` or `ThanosRuler` objects.
`Prometheus` and `ThanosRuler` objects select `PrometheusRule` objects using label and namespace selectors.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: Specification of desired alerting rule definitions for Prometheus.
properties:
groups:
description: Content of Prometheus rule file
items:
description: RuleGroup is a list of sequentially evaluated recording
and alerting rules.
properties:
interval:
description: Interval determines how often rules in the group
are evaluated.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
labels:
additionalProperties:
type: string
description: |-
Labels to add or overwrite before storing the result for its rules.
The labels defined at the rule level take precedence.
It requires Prometheus >= 3.0.0.
The field is ignored for Thanos Ruler.
type: object
limit:
description: |-
Limit the number of alerts an alerting rule and series a recording
rule can produce.
Limit is supported starting with Prometheus >= 2.31 and Thanos Ruler >= 0.24.
type: integer
name:
description: Name of the rule group.
minLength: 1
type: string
partial_response_strategy:
description: |-
PartialResponseStrategy is only used by ThanosRuler and will
be ignored by Prometheus instances.
More info: https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md#partial-response
pattern: ^(?i)(abort|warn)?$
type: string
query_offset:
description: |-
Defines the offset the rule evaluation timestamp of this particular group by the specified duration into the past.
It requires Prometheus >= v2.53.0.
It is not supported for ThanosRuler.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
rules:
description: List of alerting and recording rules.
items:
description: |-
Rule describes an alerting or recording rule
See Prometheus documentation: [alerting](https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) or [recording](https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules) rule
properties:
alert:
description: |-
Name of the alert. Must be a valid label value.
Only one of `record` and `alert` must be set.
type: string
annotations:
additionalProperties:
type: string
description: |-
Annotations to add to each alert.
Only valid for alerting rules.
type: object
expr:
anyOf:
- type: integer
- type: string
description: PromQL expression to evaluate.
x-kubernetes-int-or-string: true
for:
description: Alerts are considered firing once they have
been returned for this long.
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
keep_firing_for:
description: KeepFiringFor defines how long an alert will
continue firing after the condition that triggered it
has cleared.
minLength: 1
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
labels:
additionalProperties:
type: string
description: Labels to add or overwrite.
type: object
record:
description: |-
Name of the time series to output to. Must be a valid metric name.
Only one of `record` and `alert` must be set.
type: string
required:
- expr
type: object
type: array
required:
- name
type: object
type: array
x-kubernetes-list-map-keys:
- name
x-kubernetes-list-type: map
type: object
required:
- spec
type: object
served: true
storage: true

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,20 @@
{{/* Shortened name suffixed with upgrade-crd */}}
{{- define "kube-prometheus-stack.crd.upgradeJob.name" -}}
{{- print (include "kube-prometheus-stack.fullname" .) "-upgrade" -}}
{{- end -}}
{{- define "kube-prometheus-stack.crd.upgradeJob.labels" -}}
{{- include "kube-prometheus-stack.labels" . }}
app: {{ template "kube-prometheus-stack.name" . }}-operator
app.kubernetes.io/name: {{ template "kube-prometheus-stack.name" . }}-prometheus-operator
app.kubernetes.io/component: crds-upgrade
{{- end -}}
{{/* Create the name of crd.upgradeJob service account to use */}}
{{- define "kube-prometheus-stack.crd.upgradeJob.serviceAccountName" -}}
{{- if .Values.upgradeJob.serviceAccount.create -}}
{{ default (include "kube-prometheus-stack.crd.upgradeJob.name" .) .Values.upgradeJob.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.upgradeJob.serviceAccount.name }}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,28 @@
{{- if .Values.upgradeJob.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "kube-prometheus-stack.crd.upgradeJob.name" . }}
namespace: {{ template "kube-prometheus-stack.namespace" . }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade,pre-rollback
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "kube-prometheus-stack.crd.upgradeJob.labels" . | nindent 4 }}
rules:
- apiGroups:
- "apiextensions.k8s.io"
resources:
- "customresourcedefinitions"
verbs:
- create
- patch
- update
- get
- list
resourceNames:
{{- range $path, $_ := $.Files.Glob "crds/*.yaml" }}
- {{ ($.Files.Get $path | fromYaml ).metadata.name }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,21 @@
{{- if .Values.upgradeJob.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "kube-prometheus-stack.crd.upgradeJob.name" . }}
namespace: {{ template "kube-prometheus-stack.namespace" . }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade,pre-rollback
"helm.sh/hook-weight": "-3"
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "kube-prometheus-stack.crd.upgradeJob.labels" . | nindent 4 }}
subjects:
- kind: ServiceAccount
namespace: {{ template "kube-prometheus-stack.namespace" . }}
name: {{ template "kube-prometheus-stack.crd.upgradeJob.serviceAccountName" . }}
roleRef:
kind: ClusterRole
name: {{ template "kube-prometheus-stack.crd.upgradeJob.name" . }}
apiGroup: rbac.authorization.k8s.io
{{- end }}

View File

@ -0,0 +1,15 @@
{{- if .Values.upgradeJob.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "kube-prometheus-stack.crd.upgradeJob.serviceAccountName" . }}
namespace: {{ template "kube-prometheus-stack.namespace" . }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade,pre-rollback
"helm.sh/hook-weight": "-2"
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
labels:
{{- include "kube-prometheus-stack.crd.upgradeJob.labels" . | nindent 4 }}
binaryData:
crds.bz2: {{ .Files.Get "files/crds.bz2" | b64enc }}
{{- end }}

View File

@ -0,0 +1,146 @@
{{- if .Values.upgradeJob.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "kube-prometheus-stack.crd.upgradeJob.name" . }}
namespace: {{ template "kube-prometheus-stack.namespace" . }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade,pre-rollback
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
{{- with .Values.upgradeJob.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
{{- include "kube-prometheus-stack.crd.upgradeJob.labels" . | nindent 4 }}
{{- with .Values.upgradeJob.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
backoffLimit: 3
template:
metadata:
{{- with .Values.upgradeJob.podLabels }}
labels:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.upgradeJob.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- include "kube-prometheus-stack.imagePullSecrets" . | indent 8 }}
{{- end }}
serviceAccountName: {{ include "kube-prometheus-stack.crd.upgradeJob.serviceAccountName" . }}
initContainers:
- name: busybox
{{- $busyboxRegistry := .Values.global.imageRegistry | default .Values.upgradeJob.image.busybox.registry -}}
{{- if .Values.upgradeJob.image.sha }}
image: "{{ $busyboxRegistry }}/{{ .Values.upgradeJob.image.busybox.repository }}:{{ .Values.upgradeJob.image.busybox.tag }}@sha256:{{ .Values.upgradeJob.image.busybox.sha }}"
{{- else }}
image: "{{ $busyboxRegistry }}/{{ .Values.upgradeJob.image.busybox.repository }}:{{ .Values.upgradeJob.image.busybox.tag }}"
{{- end }}
imagePullPolicy: "{{ .Values.upgradeJob.image.busybox.pullPolicy }}"
workingDir: /tmp/
command:
- sh
args:
- -c
- bzcat /crds/crds.bz2 > /tmp/crds.yaml
{{- with .Values.upgradeJob.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.upgradeJob.containerSecurityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- mountPath: /crds/
name: crds
- mountPath: /tmp/
name: tmp
{{- with .Values.upgradeJob.extraVolumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.upgradeJob.env }}
env:
{{- range $key, $value := . }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
{{- end }}
containers:
- name: kubectl
{{- $kubectlRegistry := .Values.global.imageRegistry | default .Values.upgradeJob.image.kubectl.registry -}}
{{- $defaultKubernetesVersion := regexFind "v\\d+\\.\\d+\\.\\d+" .Capabilities.KubeVersion.Version }}
{{- if .Values.upgradeJob.image.kubectl.sha }}
image: "{{ $kubectlRegistry }}/{{ .Values.upgradeJob.image.kubectl.repository }}:{{ .Values.upgradeJob.image.kubectl.tag | default $defaultKubernetesVersion }}@sha256:{{ .Values.upgradeJob.image.kubectl.sha }}"
{{- else }}
image: "{{ $kubectlRegistry }}/{{ .Values.upgradeJob.image.kubectl.repository }}:{{ .Values.upgradeJob.image.kubectl.tag | default $defaultKubernetesVersion }}"
{{- end }}
imagePullPolicy: "{{ .Values.upgradeJob.image.kubectl.pullPolicy }}"
command:
- kubectl
args:
- apply
- --server-side
{{- if .Values.upgradeJob.forceConflicts }}
- --force-conflicts
{{- end }}
- --filename
- /tmp/crds.yaml
{{- with .Values.upgradeJob.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.upgradeJob.containerSecurityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- mountPath: /tmp/
name: tmp
{{- with .Values.upgradeJob.extraVolumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.upgradeJob.env }}
env:
{{- range $key, $value := . }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
{{- end }}
volumes:
- name: tmp
emptyDir: {}
- name: crds
configMap:
name: {{ template "kube-prometheus-stack.crd.upgradeJob.name" . }}
{{- with .Values.upgradeJob.extraVolumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: OnFailure
{{- with .Values.upgradeJob.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.upgradeJob.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.upgradeJob.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.upgradeJob.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.upgradeJob.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,20 @@
{{- if and .Values.upgradeJob.enabled .Values.upgradeJob.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: {{ .Values.upgradeJob.serviceAccount.automountServiceAccountToken }}
metadata:
name: {{ include "kube-prometheus-stack.crd.upgradeJob.serviceAccountName" . }}
namespace: {{ template "kube-prometheus-stack.namespace" . }}
annotations:
"helm.sh/hook": pre-install,pre-upgrade,pre-rollback
"helm.sh/hook-weight": "-4"
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
{{- with .Values.upgradeJob.serviceAccount.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
{{- include "kube-prometheus-stack.crd.upgradeJob.labels" . | nindent 4 }}
{{- with .Values.upgradeJob.serviceAccount.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,4 @@
## Check out kube-prometheus-stack/values.yaml for more information
## on this parameter
upgradeJob:
  enabled: false

View File

@ -0,0 +1,35 @@
annotations:
  artifacthub.io/license: Apache-2.0
  artifacthub.io/links: |
    - name: Chart Source
      url: https://github.com/grafana/helm-charts
    - name: Upstream Project
      url: https://github.com/grafana/grafana
apiVersion: v2
appVersion: 12.0.0-security-01
description: The leading tool for querying and visualizing time series and metrics.
home: https://grafana.com
icon: https://artifacthub.io/image/b4fed1a7-6c8f-4945-b99d-096efa3e4116
keywords:
  - monitoring
  - metric
kubeVersion: ^1.8.0-0
maintainers:
  - email: zanhsieh@gmail.com
    name: zanhsieh
  - email: rluckie@cisco.com
    name: rtluckie
  - email: maor.friedman@redhat.com
    name: maorfr
  - email: miroslav.hadzhiev@gmail.com
    name: Xtigyro
  - email: mail@torstenwalter.de
    name: torstenwalter
  - email: github@jkroepke.de
    name: jkroepke
name: grafana
sources:
  - https://github.com/grafana/grafana
  - https://github.com/grafana/helm-charts
type: application
version: 9.2.1

View File

@ -0,0 +1,797 @@
# Grafana Helm Chart
* Installs the web dashboarding system [Grafana](http://grafana.org/)
## Get Repo Info
```console
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```
_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._
## Installing the Chart
To install the chart with the release name `my-release`:
```console
helm install my-release grafana/grafana
```
## Uninstalling the Chart
To uninstall/delete the my-release deployment:
```console
helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Upgrading an existing Release to a new major version
A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an
incompatible breaking change needing manual actions.
### To 4.0.0 (And 3.12.1)
This version requires Helm >= 2.12.0.
### To 5.0.0
You have to add --force to your helm upgrade command as the labels of the chart have changed.
### To 6.0.0
This version requires Helm >= 3.1.0.
### To 7.0.0
For consistency with other Helm charts, the `global.image.registry` parameter was renamed
to `global.imageRegistry`. If you were not previously setting `global.image.registry`, no action
is required on upgrade. If you were previously setting `global.image.registry`, you will
need to instead set `global.imageRegistry`.
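For example (a minimal sketch of the rename; the registry value is a placeholder):
```yaml
# Chart < 7.0.0
global:
  image:
    registry: my-registry.example.com

# Chart >= 7.0.0
global:
  imageRegistry: my-registry.example.com
```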
## Configuration
| Parameter | Description | Default |
|-------------------------------------------|-----------------------------------------------|---------------------------------------------------------|
| `replicas` | Number of nodes | `1` |
| `podDisruptionBudget.minAvailable` | Pod disruption minimum available | `nil` |
| `podDisruptionBudget.maxUnavailable` | Pod disruption maximum unavailable | `nil` |
| `podDisruptionBudget.apiVersion` | Pod disruption apiVersion | `nil` |
| `deploymentStrategy` | Deployment strategy | `{ "type": "RollingUpdate" }` |
| `livenessProbe` | Liveness Probe settings | `{ "httpGet": { "path": "/api/health", "port": 3000 }, "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }` |
| `readinessProbe` | Readiness Probe settings | `{ "httpGet": { "path": "/api/health", "port": 3000 } }`|
| `securityContext` | Deployment securityContext | `{"runAsUser": 472, "runAsGroup": 472, "fsGroup": 472}` |
| `priorityClassName` | Name of Priority Class to assign pods | `nil` |
| `image.registry` | Image registry | `docker.io` |
| `image.repository` | Image repository | `grafana/grafana` |
| `image.tag` | Overrides the Grafana image tag whose default is the chart appVersion (`Must be >= 5.0.0`) | `` |
| `image.sha` | Image sha (optional) | `` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Image pull secrets (can be templated) | `[]` |
| `service.enabled` | Enable grafana service | `true` |
| `service.ipFamilies` | Kubernetes service IP families | `[]` |
| `service.ipFamilyPolicy` | Kubernetes service IP family policy | `""` |
| `service.sessionAffinity` | Kubernetes service session affinity config | `""` |
| `service.type` | Kubernetes service type | `ClusterIP` |
| `service.port` | Kubernetes port where service is exposed | `80` |
| `service.portName` | Name of the port on the service | `service` |
| `service.appProtocol` | Adds the appProtocol field to the service | `` |
| `service.targetPort` | Internal service is port | `3000` |
| `service.nodePort` | Kubernetes service nodePort | `nil` |
| `service.annotations` | Service annotations (can be templated) | `{}` |
| `service.labels` | Custom labels | `{}` |
| `service.clusterIP` | internal cluster service IP | `nil` |
| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `nil` |
| `service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to lb (if supported) | `[]` |
| `service.externalIPs` | service external IP addresses | `[]` |
| `service.externalTrafficPolicy` | change the default externalTrafficPolicy | `nil` |
| `headlessService` | Create a headless service | `false` |
| `extraExposePorts` | Additional service ports for sidecar containers| `[]` |
| `hostAliases` | adds rules to the pod's /etc/hosts | `[]` |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations (values are templated) | `{}` |
| `ingress.labels` | Custom labels | `{}` |
| `ingress.path` | Ingress accepted path | `/` |
| `ingress.pathType` | Ingress type of path | `Prefix` |
| `ingress.hosts` | Ingress accepted hostnames | `["chart-example.local"]` |
| `ingress.extraPaths` | Ingress extra paths to prepend to every host configuration. Useful when configuring [custom actions with AWS ALB Ingress Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/ingress/annotations/#actions). Requires `ingress.hosts` to have one or more host entries. | `[]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `ingress.ingressClassName` | Ingress Class Name. MAY be required for Kubernetes versions >= 1.18 | `""` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `extraInitContainers` | Init containers to add to the grafana pod | `{}` |
| `extraContainers` | Sidecar containers to add to the grafana pod | `""` |
| `extraContainerVolumes` | Volumes that can be mounted in sidecar containers | `[]` |
| `extraLabels` | Custom labels for all manifests | `{}` |
| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
| `persistence.enabled` | Use persistent volume to store data | `false` |
| `persistence.type` | Type of persistence (`pvc` or `statefulset`) | `pvc` |
| `persistence.size` | Size of persistent volume claim | `10Gi` |
| `persistence.existingClaim` | Use an existing PVC to persist data (can be templated) | `nil` |
| `persistence.storageClassName` | Type of persistent volume claim | `nil` |
| `persistence.accessModes` | Persistence access modes | `[ReadWriteOnce]` |
| `persistence.annotations` | PersistentVolumeClaim annotations | `{}` |
| `persistence.finalizers` | PersistentVolumeClaim finalizers | `[ "kubernetes.io/pvc-protection" ]` |
| `persistence.extraPvcLabels` | Extra labels to apply to a PVC. | `{}` |
| `persistence.subPath` | Mount a sub dir of the persistent volume (can be templated) | `nil` |
| `persistence.inMemory.enabled` | If persistence is not enabled, whether to mount the local storage in-memory to improve performance | `false` |
| `persistence.inMemory.sizeLimit` | SizeLimit for the in-memory local storage | `nil` |
| `persistence.disableWarning` | Hide NOTES warning, useful when persisting to a database | `false` |
| `initChownData.enabled` | If false, don't reset data ownership at startup | true |
| `initChownData.image.registry` | init-chown-data container image registry | `docker.io` |
| `initChownData.image.repository` | init-chown-data container image repository | `busybox` |
| `initChownData.image.tag` | init-chown-data container image tag | `1.31.1` |
| `initChownData.image.sha` | init-chown-data container image sha (optional)| `""` |
| `initChownData.image.pullPolicy` | init-chown-data container image pull policy | `IfNotPresent` |
| `initChownData.resources` | init-chown-data pod resource requests & limits | `{}` |
| `initChownData.securityContext` | init-chown-data pod securityContext | `{"readOnlyRootFilesystem": false, "runAsNonRoot": false, "runAsUser": 0, "seccompProfile": {"type": "RuntimeDefault"}, "capabilities": {"add": ["CHOWN"], "drop": ["ALL"]}}` |
| `schedulerName` | Alternate scheduler name | `nil` |
| `env` | Extra environment variables passed to pods | `{}` |
| `envValueFrom` | Environment variables from alternate sources. See the API docs on [EnvVarSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#envvarsource-v1-core) for format details. Can be templated | `{}` |
| `envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` |
| `envFromSecrets` | List of Kubernetes secrets (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `[]` |
| `envFromConfigMaps` | List of Kubernetes ConfigMaps (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `[]` |
| `envRenderSecret` | Sensible environment variables passed to pods and stored as secret. (passed through [tpl](https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function)) | `{}` |
| `enableServiceLinks` | Inject Kubernetes services as environment variables. | `true` |
| `extraSecretMounts` | Additional grafana server secret mounts | `[]` |
| `extraVolumeMounts` | Additional grafana server volume mounts | `[]` |
| `extraVolumes` | Additional Grafana server volumes | `[]` |
| `automountServiceAccountToken` | Mounted the service account token on the grafana pod. Mandatory, if sidecars are enabled | `true` |
| `createConfigmap` | Enable creating the grafana configmap | `true` |
| `extraConfigmapMounts` | Additional grafana server configMap volume mounts (values are templated) | `[]` |
| `extraEmptyDirMounts` | Additional grafana server emptyDir volume mounts | `[]` |
| `plugins` | Plugins to be loaded along with Grafana | `[]` |
| `datasources` | Configure grafana datasources (passed through tpl) | `{}` |
| `alerting` | Configure grafana alerting (passed through tpl) | `{}` |
| `notifiers` | Configure grafana notifiers | `{}` |
| `dashboardProviders` | Configure grafana dashboard providers | `{}` |
| `defaultCurlOptions` | Configure default curl short options for all dashboards, the beginning dash is required | `-skf` |
| `dashboards` | Dashboards to import | `{}` |
| `dashboardsConfigMaps` | ConfigMaps reference that contains dashboards | `{}` |
| `grafana.ini` | Grafana's primary configuration | `{}` |
| `global.imageRegistry` | Global image pull registry for all images. | `null` |
| `global.imagePullSecrets` | Global image pull secrets (can be templated). Allows either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style). | `[]` |
| `ldap.enabled` | Enable LDAP authentication | `false` |
| `ldap.existingSecret` | The name of an existing secret containing the `ldap.toml` file, this must have the key `ldap-toml`. | `""` |
| `ldap.config` | Grafana's LDAP configuration | `""` |
| `annotations` | Deployment annotations | `{}` |
| `labels` | Deployment labels | `{}` |
| `podAnnotations` | Pod annotations | `{}` |
| `podLabels` | Pod labels | `{}` |
| `podPortName` | Name of the grafana port on the pod | `grafana` |
| `lifecycleHooks` | Lifecycle hooks for podStart and preStop [Example](https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers) | `{}` |
| `sidecar.image.registry` | Sidecar image registry | `quay.io` |
| `sidecar.image.repository` | Sidecar image repository | `kiwigrid/k8s-sidecar` |
| `sidecar.image.tag` | Sidecar image tag | `1.30.0` |
| `sidecar.image.sha` | Sidecar image sha (optional) | `""` |
| `sidecar.imagePullPolicy` | Sidecar image pull policy | `IfNotPresent` |
| `sidecar.resources` | Sidecar resources | `{}` |
| `sidecar.securityContext` | Sidecar securityContext | `{}` |
| `sidecar.enableUniqueFilenames` | Sets the kiwigrid/k8s-sidecar UNIQUE_FILENAMES environment variable. If set to `true` the sidecar will create unique filenames where duplicate data keys exist between ConfigMaps and/or Secrets within the same or multiple Namespaces. | `false` |
| `sidecar.alerts.enabled` | Enables the cluster wide search for alerts and adds/updates/deletes them in grafana |`false` |
| `sidecar.alerts.label` | Label that config maps with alerts should have to be added (can be templated) | `grafana_alert` |
| `sidecar.alerts.labelValue` | Label value that config maps with alerts should have to be added (can be templated) | `""` |
| `sidecar.alerts.searchNamespace` | Namespaces list. If specified, the sidecar will search for alerts config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
| `sidecar.alerts.watchMethod` | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
| `sidecar.alerts.resource` | Should the sidecar looks into secrets, configmaps or both. | `both` |
| `sidecar.alerts.reloadURL` | Full url of datasource configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/alerting/reload"` |
| `sidecar.alerts.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
| `sidecar.alerts.initAlerts` | Set to true to deploy the alerts sidecar as an initContainer. This is needed if skipReload is true, to load any alerts defined at startup time. | `false` |
| `sidecar.alerts.extraMounts` | Additional alerts sidecar volume mounts. | `[]` |
| `sidecar.dashboards.enabled` | Enables the cluster wide search for dashboards and adds/updates/deletes them in grafana | `false` |
| `sidecar.dashboards.SCProvider` | Enables creation of sidecar provider | `true` |
| `sidecar.dashboards.provider.name` | Unique name of the grafana provider | `sidecarProvider` |
| `sidecar.dashboards.provider.orgid` | Id of the organisation, to which the dashboards should be added | `1` |
| `sidecar.dashboards.provider.folder` | Logical folder in which grafana groups dashboards | `""` |
| `sidecar.dashboards.provider.folderUid` | Allows you to specify the static UID for the logical folder above | `""` |
| `sidecar.dashboards.provider.disableDelete` | Activate to avoid the deletion of imported dashboards | `false` |
| `sidecar.dashboards.provider.allowUiUpdates` | Allow updating provisioned dashboards from the UI | `false` |
| `sidecar.dashboards.provider.type` | Provider type | `file` |
| `sidecar.dashboards.provider.foldersFromFilesStructure` | Allow Grafana to replicate dashboard structure from filesystem. | `false` |
| `sidecar.dashboards.watchMethod` | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
| `sidecar.skipTlsVerify` | Set to true to skip tls verification for kube api calls | `nil` |
| `sidecar.dashboards.label` | Label that config maps with dashboards should have to be added (can be templated) | `grafana_dashboard` |
| `sidecar.dashboards.labelValue` | Label value that config maps with dashboards should have to be added (can be templated) | `""` |
| `sidecar.dashboards.folder` | Folder in the pod that should hold the collected dashboards (unless `sidecar.dashboards.defaultFolderName` is set). This path will be mounted. | `/tmp/dashboards` |
| `sidecar.dashboards.folderAnnotation` | The annotation the sidecar will look for in configmaps to override the destination folder for files | `nil` |
| `sidecar.dashboards.defaultFolderName` | The default folder name, it will create a subfolder under the `sidecar.dashboards.folder` and put dashboards in there instead | `nil` |
| `sidecar.dashboards.searchNamespace` | Namespaces list. If specified, the sidecar will search for dashboards config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
| `sidecar.dashboards.script` | Absolute path to shell script to execute after a configmap got reloaded. | `nil` |
| `sidecar.dashboards.reloadURL` | Full url of dashboards configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/dashboards/reload"` |
| `sidecar.dashboards.skipReload` | Enabling this omits defining the REQ_USERNAME, REQ_PASSWORD, REQ_URL and REQ_METHOD environment variables | `false` |
| `sidecar.dashboards.resource` | Should the sidecar looks into secrets, configmaps or both. | `both` |
| `sidecar.dashboards.extraMounts` | Additional dashboard sidecar volume mounts. | `[]` |
| `sidecar.datasources.enabled` | Enables the cluster wide search for datasources and adds/updates/deletes them in grafana |`false` |
| `sidecar.datasources.label` | Label that config maps with datasources should have to be added (can be templated) | `grafana_datasource` |
| `sidecar.datasources.labelValue` | Label value that config maps with datasources should have to be added (can be templated) | `""` |
| `sidecar.datasources.searchNamespace` | Namespaces list. If specified, the sidecar will search for datasources config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
| `sidecar.datasources.watchMethod` | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
| `sidecar.datasources.resource` | Should the sidecar looks into secrets, configmaps or both. | `both` |
| `sidecar.datasources.reloadURL` | Full url of datasource configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/datasources/reload"` |
| `sidecar.datasources.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
| `sidecar.datasources.initDatasources` | Set to true to deploy the datasource sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any datasources defined at startup time. | `false` |
| `sidecar.notifiers.enabled` | Enables the cluster wide search for notifiers and adds/updates/deletes them in grafana | `false` |
| `sidecar.notifiers.label` | Label that config maps with notifiers should have to be added (can be templated) | `grafana_notifier` |
| `sidecar.notifiers.labelValue` | Label value that config maps with notifiers should have to be added (can be templated) | `""` |
| `sidecar.notifiers.searchNamespace` | Namespaces list. If specified, the sidecar will search for notifiers config-maps (or secrets) inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
| `sidecar.notifiers.watchMethod` | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
| `sidecar.notifiers.resource` | Should the sidecar looks into secrets, configmaps or both. | `both` |
| `sidecar.notifiers.reloadURL` | Full url of notifier configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/notifications/reload"` |
| `sidecar.notifiers.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
| `sidecar.notifiers.initNotifiers` | Set to true to deploy the notifier sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any notifiers defined at startup time. | `false` |
| `smtp.existingSecret` | The name of an existing secret containing the SMTP credentials. | `""` |
| `smtp.userKey` | The key in the existing SMTP secret containing the username. | `"user"` |
| `smtp.passwordKey` | The key in the existing SMTP secret containing the password. | `"password"` |
| `admin.existingSecret` | The name of an existing secret containing the admin credentials (can be templated). | `""` |
| `admin.userKey` | The key in the existing admin secret containing the username. | `"admin-user"` |
| `admin.passwordKey` | The key in the existing admin secret containing the password. | `"admin-password"` |
| `serviceAccount.automountServiceAccountToken` | Automount the service account token on all pods where is service account is used | `false` |
| `serviceAccount.annotations` | ServiceAccount annotations | |
| `serviceAccount.create` | Create service account | `true` |
| `serviceAccount.labels` | ServiceAccount labels | `{}` |
| `serviceAccount.name` | Service account name to use, when empty will be set to created account if `serviceAccount.create` is set else to `default` | `` |
| `serviceAccount.nameTest` | Service account name to use for test, when empty will be set to created account if `serviceAccount.create` is set else to `default` | `nil` |
| `rbac.create` | Create and use RBAC resources | `true` |
| `rbac.namespaced` | Creates Role and Rolebinding instead of the default ClusterRole and ClusteRoleBindings for the grafana instance | `false` |
| `rbac.useExistingRole` | Set to a rolename to use existing role - skipping role creating - but still doing serviceaccount and rolebinding to the rolename set here. | `nil` |
| `rbac.pspEnabled` | Create PodSecurityPolicy (with `rbac.create`, grant roles permissions as well) | `false` |
| `rbac.pspUseAppArmor` | Enforce AppArmor in created PodSecurityPolicy (requires `rbac.pspEnabled`) | `false` |
| `rbac.extraRoleRules` | Additional rules to add to the Role | [] |
| `rbac.extraClusterRoleRules` | Additional rules to add to the ClusterRole | [] |
| `command` | Define command to be executed by grafana container at startup | `nil` |
| `args` | Define additional args if command is used | `nil` |
| `testFramework.enabled` | Whether to create test-related resources | `true` |
| `testFramework.image.registry` | `test-framework` image registry. | `docker.io` |
| `testFramework.image.repository` | `test-framework` image repository. | `bats/bats` |
| `testFramework.image.tag` | `test-framework` image tag. | `v1.4.1` |
| `testFramework.imagePullPolicy` | `test-framework` image pull policy. | `IfNotPresent` |
| `testFramework.securityContext` | `test-framework` securityContext | `{}` |
| `downloadDashboards.env` | Environment variables to be passed to the `download-dashboards` container | `{}` |
| `downloadDashboards.envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` |
| `downloadDashboards.resources` | Resources of `download-dashboards` container | `{}` |
| `downloadDashboardsImage.registry` | Curl docker image registry | `docker.io` |
| `downloadDashboardsImage.repository` | Curl docker image repository | `curlimages/curl` |
| `downloadDashboardsImage.tag` | Curl docker image tag | `8.9.1` |
| `downloadDashboardsImage.sha` | Curl docker image sha (optional) | `""` |
| `downloadDashboardsImage.pullPolicy` | Curl docker image pull policy | `IfNotPresent` |
| `namespaceOverride` | Override the deployment namespace | `""` (`Release.Namespace`) |
| `serviceMonitor.enabled` | Use servicemonitor from prometheus operator | `false` |
| `serviceMonitor.namespace` | Namespace this servicemonitor is installed in | |
| `serviceMonitor.interval` | How frequently Prometheus should scrape | `1m` |
| `serviceMonitor.path` | Path to scrape | `/metrics` |
| `serviceMonitor.scheme` | Scheme to use for metrics scraping | `http` |
| `serviceMonitor.tlsConfig` | TLS configuration block for the endpoint | `{}` |
| `serviceMonitor.labels` | Labels for the servicemonitor passed to Prometheus Operator | `{}` |
| `serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `30s` |
| `serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping. | `[]` |
| `serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion. | `[]` |
| `revisionHistoryLimit` | Number of old ReplicaSets to retain | `10` |
| `imageRenderer.enabled` | Enable the image-renderer deployment & service | `false` |
| `imageRenderer.image.registry` | image-renderer Image registry | `docker.io` |
| `imageRenderer.image.repository` | image-renderer Image repository | `grafana/grafana-image-renderer` |
| `imageRenderer.image.tag` | image-renderer Image tag | `latest` |
| `imageRenderer.image.sha` | image-renderer Image sha (optional) | `""` |
| `imageRenderer.image.pullSecrets` | image-renderer Image pull secrets (optional) | `[]` |
| `imageRenderer.image.pullPolicy` | image-renderer ImagePullPolicy | `Always` |
| `imageRenderer.env` | extra env-vars for image-renderer | `{}` |
| `imageRenderer.envValueFrom` | Environment variables for image-renderer from alternate sources. See the API docs on [EnvVarSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#envvarsource-v1-core) for format details. Can be templated | `{}` |
| `imageRenderer.extraConfigmapMounts` | Additional image-renderer configMap volume mounts (values are templated) | `[]` |
| `imageRenderer.extraSecretMounts` | Additional image-renderer secret volume mounts | `[]` |
| `imageRenderer.extraVolumeMounts` | Additional image-renderer volume mounts | `[]` |
| `imageRenderer.extraVolumes` | Additional image-renderer volumes | `[]` |
| `imageRenderer.serviceAccountName` | image-renderer deployment serviceAccountName | `""` |
| `imageRenderer.securityContext` | image-renderer deployment securityContext | `{}` |
| `imageRenderer.podAnnotations` | image-renderer image-renderer pod annotation | `{}` |
| `imageRenderer.hostAliases` | image-renderer deployment Host Aliases | `[]` |
| `imageRenderer.priorityClassName` | image-renderer deployment priority class | `''` |
| `imageRenderer.service.enabled` | Enable the image-renderer service | `true` |
| `imageRenderer.service.portName` | image-renderer service port name | `http` |
| `imageRenderer.service.port` | image-renderer port used by deployment | `8081` |
| `imageRenderer.service.targetPort` | image-renderer service port used by service | `8081` |
| `imageRenderer.appProtocol` | Adds the appProtocol field to the service | `` |
| `imageRenderer.grafanaSubPath` | Grafana sub path to use for image renderer callback url | `''` |
| `imageRenderer.serverURL` | Remote image renderer url | `''` |
| `imageRenderer.renderingCallbackURL` | Callback url for the Grafana image renderer | `''` |
| `imageRenderer.podPortName` | name of the image-renderer port on the pod | `http` |
| `imageRenderer.revisionHistoryLimit` | number of image-renderer replica sets to keep | `10` |
| `imageRenderer.networkPolicy.limitIngress` | Enable a NetworkPolicy to limit inbound traffic from only the created grafana pods | `true` |
| `imageRenderer.networkPolicy.limitEgress` | Enable a NetworkPolicy to limit outbound traffic to only the created grafana pods | `false` |
| `imageRenderer.resources` | Set resource limits for image-renderer pods | `{}` |
| `imageRenderer.nodeSelector` | Node labels for pod assignment | `{}` |
| `imageRenderer.tolerations` | Toleration labels for pod assignment | `[]` |
| `imageRenderer.affinity` | Affinity settings for pod assignment | `{}` |
| `networkPolicy.enabled` | Enable creation of NetworkPolicy resources. | `false` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed | `{}` |
| `networkPolicy.ingress` | Enable the creation of an ingress network policy | `true` |
| `networkPolicy.egress.enabled` | Enable the creation of an egress network policy | `false` |
| `networkPolicy.egress.ports` | An array of ports to allow for the egress | `[]` |
| `enableKubeBackwardCompatibility` | Enable backward compatibility of kubernetes where pod's definition version below 1.13 doesn't have the enableServiceLinks option | `false` |
### Example ingress with path
With grafana 6.3 and above
```yaml
grafana.ini:
  server:
    domain: monitoring.example.com
    root_url: "%(protocol)s://%(domain)s/grafana"
    serve_from_sub_path: true

ingress:
  enabled: true
  hosts:
    - "monitoring.example.com"
  path: "/grafana"
```
### Example of extraVolumeMounts and extraVolumes
Configure additional volumes with `extraVolumes` and volume mounts with `extraVolumeMounts`.
Example for `extraVolumeMounts` and corresponding `extraVolumes`:
```yaml
extraVolumeMounts:
  - name: plugins
    mountPath: /var/lib/grafana/plugins
    subPath: configs/grafana/plugins
    readOnly: false
  - name: dashboards
    mountPath: /var/lib/grafana/dashboards
    hostPath: /usr/shared/grafana/dashboards
    readOnly: false

extraVolumes:
  - name: plugins
    existingClaim: existing-grafana-claim
  - name: dashboards
    hostPath: /usr/shared/grafana/dashboards
```
Volumes default to `emptyDir`. Set to `persistentVolumeClaim`,
`hostPath`, `csi`, or `configMap` for other types. For a
`persistentVolumeClaim`, specify an existing claim name with
`existingClaim`.
## Import dashboards
There are a few methods to import dashboards to Grafana. Below are some examples and explanations as to how to use each method:
```yaml
dashboards:
  default:
    some-dashboard:
      json: |
        {
          "annotations":
          ...
          # Complete json file here
          ...
          "title": "Some Dashboard",
          "uid": "abcd1234",
          "version": 1
        }
    custom-dashboard:
      # This is a path to a file inside the dashboards directory inside the chart directory
      file: dashboards/custom-dashboard.json
    prometheus-stats:
      # Ref: https://grafana.com/dashboards/2
      gnetId: 2
      revision: 2
      datasource: Prometheus
    loki-dashboard-quick-search:
      gnetId: 12019
      revision: 2
      datasource:
        - name: DS_PROMETHEUS
          value: Prometheus
        - name: DS_LOKI
          value: Loki
    local-dashboard:
      url: https://github.com/cloudnative-pg/grafana-dashboards/blob/main/charts/cluster/grafana-dashboard.json
      # redirects to:
      # https://raw.githubusercontent.com/cloudnative-pg/grafana-dashboards/refs/heads/main/charts/cluster/grafana-dashboard.json
      # default: -skf
      # -s - silent mode
      # -k - allow insecure (eg: non-TLS) connections
      # -f - fail fast
      # -L - follow HTTP redirects
      curlOptions: -Lf
```
## BASE64 dashboards
Dashboards can be stored on a server that does not return JSON directly but instead returns a Base64-encoded file (e.g. Gerrit).
A `b64content` parameter has been added for the `url` use case: if you set `b64content` to `true` alongside the `url` entry, Base64 decoding is applied before the file is saved to disk.
If this entry is not set or is `false`, no decoding is applied to the file before saving it to disk.
### Gerrit use case
The Gerrit API for downloading files has the following schema: <https://yourgerritserver/a/{project-name}/branches/{branch-id}/files/{file-id}/content>, where {project-name} and
{file-id} usually contain '/' in their values, so those slashes MUST be replaced by %2F. For example, if project-name is user/repo, branch-id is master, and file-id is dir1/dir2/dashboard,
the url value is <https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content>.
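Putting this together, a dashboard entry that downloads from Gerrit and enables Base64 decoding could look like this sketch (server and paths are placeholders):
```yaml
dashboards:
  default:
    gerrit-dashboard:
      url: https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
      b64content: true
```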
## Sidecar for dashboards
If the parameter `sidecar.dashboards.enabled` is set, a sidecar container is deployed in the grafana
pod. This container watches all configmaps (or secrets) in the cluster and filters out the ones with
a label as defined in `sidecar.dashboards.label`. The files defined in those configmaps are written
to a folder and accessed by grafana. Changes to the configmaps are monitored and the imported
dashboards are deleted/updated.
A recommendation is to use one configmap per dashboard, as removing individual dashboards from a configmap that contains multiple dashboards is currently not properly mirrored in Grafana.
Example dashboard config:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-grafana-dashboard
  labels:
    grafana_dashboard: "1"
data:
  k8s-dashboard.json: |-
    [...]
```
## Sidecar for datasources
If the parameter `sidecar.datasources.enabled` is set, an init container is deployed in the grafana
pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and
filters out the ones with a label as defined in `sidecar.datasources.label`. The files defined in
those secrets are written to a folder and accessed by grafana on startup. Using these yaml files,
the data sources in grafana can be imported.
Should you aim for reloading datasources in Grafana each time the config is changed, set `sidecar.datasources.skipReload: false` and adjust `sidecar.datasources.reloadURL` to `http://<svc-name>.<namespace>.svc.cluster.local/api/admin/provisioning/datasources/reload`.
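A corresponding values snippet might look like the following sketch (replace the service name and namespace):
```yaml
sidecar:
  datasources:
    enabled: true
    skipReload: false
    reloadURL: "http://<svc-name>.<namespace>.svc.cluster.local/api/admin/provisioning/datasources/reload"
```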
Secrets are recommended over configmaps for this use case because datasources usually contain private
data like usernames and passwords. Secrets are the more appropriate cluster resource to manage those.
Example values to add a postgres datasource as a kubernetes secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: grafana-datasources
  labels:
    grafana_datasource: 'true' # default value for: sidecar.datasources.label
stringData:
  pg-db.yaml: |-
    apiVersion: 1
    datasources:
      - name: My pg db datasource
        type: postgres
        url: my-postgresql-db:5432
        user: db-readonly-user
        secureJsonData:
          password: 'SUperSEcretPa$$word'
        jsonData:
          database: my_datase
          sslmode: 'disable' # disable/require/verify-ca/verify-full
          maxOpenConns: 0 # Grafana v5.4+
          maxIdleConns: 2 # Grafana v5.4+
          connMaxLifetime: 14400 # Grafana v5.4+
          postgresVersion: 1000 # 903=9.3, 904=9.4, 905=9.5, 906=9.6, 1000=10
          timescaledb: false
        # <bool> allow users to edit datasources from the UI.
        editable: false
```
Example values to add a datasource adapted from [Grafana](http://docs.grafana.org/administration/provisioning/#example-datasource-config-file):
```yaml
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      # <string, required> name of the datasource. Required
      - name: Graphite
        # <string, required> datasource type. Required
        type: graphite
        # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
        access: proxy
        # <int> org id. will default to orgId 1 if not specified
        orgId: 1
        # <string> url
        url: http://localhost:8080
        # <string> database password, if used
        password:
        # <string> database user, if used
        user:
        # <string> database name, if used
        database:
        # <bool> enable/disable basic auth
        basicAuth:
        # <string> basic auth username
        basicAuthUser:
        # <string> basic auth password
        basicAuthPassword:
        # <bool> enable/disable with credentials headers
        withCredentials:
        # <bool> mark as default datasource. Max one per org
        isDefault:
        # <map> fields that will be converted to json and stored in json_data
        jsonData:
          graphiteVersion: "1.1"
          tlsAuth: true
          tlsAuthWithCACert: true
        # <string> json object of data that will be encrypted.
        secureJsonData:
          tlsCACert: "..."
          tlsClientCert: "..."
          tlsClientKey: "..."
        version: 1
        # <bool> allow users to edit datasources from the UI.
        editable: false
```
## Sidecar for notifiers
If the parameter `sidecar.notifiers.enabled` is set, an init container is deployed in the grafana
pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and
filters out the ones with a label as defined in `sidecar.notifiers.label`. The files defined in
those secrets are written to a folder and accessed by grafana on startup. Using these yaml files,
the notification channels in grafana can be imported. The secrets must be created before
`helm install` so that the notifiers init container can list the secrets.
Secrets are recommended over configmaps for this use case because alert notification channels usually contain
private data like SMTP usernames and passwords. Secrets are the more appropriate cluster resource to manage those.
Example datasource config adapted from [Grafana](https://grafana.com/docs/grafana/latest/administration/provisioning/#alert-notification-channels):
```yaml
notifiers:
  - name: notification-channel-1
    type: slack
    uid: notifier1
    # either
    org_id: 2
    # or
    org_name: Main Org.
    is_default: true
    send_reminder: true
    frequency: 1h
    disable_resolve_message: false
    # See `Supported Settings` section for settings supported for each
    # alert notification type.
    settings:
      recipient: 'XXX'
      token: 'xoxb'
      uploadImage: true
      url: https://slack.com
delete_notifiers:
  - name: notification-channel-1
    uid: notifier1
    org_id: 2
  - name: notification-channel-2
    # default org_id: 1
```
## Sidecar for alerting resources
If the parameter `sidecar.alerts.enabled` is set, a sidecar container is deployed in the grafana
pod. This container watches all configmaps (or secrets) in the cluster (namespace defined by `sidecar.alerts.searchNamespace`) and filters out the ones with
a label as defined in `sidecar.alerts.label` (default is `grafana_alert`). The files defined in those configmaps are written
to a folder and accessed by grafana. Changes to the configmaps are monitored and the imported alerting resources are updated; however, deletions are a little more complicated (see below).
This sidecar can be used to provision alert rules, contact points, notification policies, notification templates and mute timings as shown in [Grafana Documentation](https://grafana.com/docs/grafana/next/alerting/set-up/provision-alerting-resources/file-provisioning/).
To fetch the alert config which will be provisioned, use the alert provisioning API ([Grafana Documentation](https://grafana.com/docs/grafana/next/developers/http_api/alerting_provisioning/)).
You can use either JSON or YAML format.
Example config for an alert rule:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: sample-grafana-alert
labels:
grafana_alert: "1"
data:
k8s-alert.yml: |-
apiVersion: 1
groups:
- orgId: 1
name: k8s-alert
[...]
```
Deleting provisioned alert rules is a two-step process: first delete the configmap that defined the alert rule,
then create a configuration which deletes the alert rule.
Example deletion configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: delete-sample-grafana-alert
namespace: monitoring
labels:
grafana_alert: "1"
data:
delete-k8s-alert.yml: |-
apiVersion: 1
deleteRules:
- orgId: 1
uid: 16624780-6564-45dc-825c-8bded4ad92d3
```
## Statically provision alerting resources
If you don't need to change alerting resources (alert rules, contact points, notification policies and notification templates) regularly, you can use the `alerting` config option instead of the sidecar option above.
The alerting config is then rendered statically when the chart is templated and applied as part of the Helm release.
There are two methods to statically provision alerting configuration. Below are some examples and explanations of how to use each method:
```yaml
alerting:
team1-alert-rules.yaml:
file: alerting/team1/rules.yaml
team2-alert-rules.yaml:
file: alerting/team2/rules.yaml
team3-alert-rules.yaml:
file: alerting/team3/rules.yaml
notification-policies.yaml:
file: alerting/shared/notification-policies.yaml
notification-templates.yaml:
file: alerting/shared/notification-templates.yaml
contactpoints.yaml:
apiVersion: 1
contactPoints:
- orgId: 1
name: Slack channel
receivers:
- uid: default-receiver
type: slack
settings:
# Webhook URL to be filled in
url: ""
# We need to escape double curly braces for the tpl function.
text: '{{ `{{ template "default.message" . }}` }}'
title: '{{ `{{ template "default.title" . }}` }}'
```
The two possibilities for static alerting resource provisioning are:
* Inlining the file contents as shown for contact points in the above example.
* Importing a file using a relative path starting from the chart root directory, as shown for the alert rules in the above example (a sketch of such a file follows this list).
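The files referenced via `file:` use Grafana's file-provisioning format for alerting resources. A truncated sketch of what one of the rule files above (e.g. `alerting/team1/rules.yaml`) might contain; the group name, folder and interval are illustrative:
```yaml
apiVersion: 1
groups:
  - orgId: 1
    name: team1-rules
    folder: Team 1
    interval: 1m
    rules:
      [...]
```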
### Important notes on file provisioning
* The format of the files is defined in the [Grafana documentation](https://grafana.com/docs/grafana/next/alerting/set-up/provision-alerting-resources/file-provisioning/) on file provisioning.
* The chart supports importing YAML and JSON files.
* The filename must be unique, otherwise one volume mount will overwrite the other.
* In case of inlining, double curly braces that arise from the Grafana configuration format and are not intended as templates for the chart must be escaped.
* The number of total files under `alerting:` is not limited. Each file will end up as a volume mount in the corresponding provisioning folder of the deployed Grafana instance.
* The file size for each import is limited by what the function `.Files.Get` can handle, which suffices for most cases.
## How to serve Grafana with a path prefix (/grafana)
In order to serve Grafana with a prefix (e.g., <http://example.com/grafana>), add the following to your values.yaml.
```yaml
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
path: /grafana/?(.*)
hosts:
- k8s.example.dev
grafana.ini:
server:
root_url: http://localhost:3000/grafana # this host can be localhost
```
## How to securely reference secrets in grafana.ini
This example uses Grafana [file providers](https://grafana.com/docs/grafana/latest/administration/configuration/#file-provider) for secret values and the `extraSecretMounts` configuration flag (Additional grafana server secret mounts) to mount the secrets.
In `grafana.ini` (set through the chart's `grafana.ini` values, which are rendered into INI sections):
```yaml
grafana.ini:
  auth.generic_oauth:
    enabled: true
    client_id: $__file{/etc/secrets/auth_generic_oauth/client_id}
    client_secret: $__file{/etc/secrets/auth_generic_oauth/client_secret}
```
An existing secret, or one created along with the helm release:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
name: auth-generic-oauth-secret
type: Opaque
stringData:
client_id: <value>
client_secret: <value>
```
Include in the `extraSecretMounts` configuration flag:
```yaml
extraSecretMounts:
- name: auth-generic-oauth-secret-mount
secretName: auth-generic-oauth-secret
defaultMode: 0440
mountPath: /etc/secrets/auth_generic_oauth
readOnly: true
```
### extraSecretMounts using a Container Storage Interface (CSI) provider
This example uses a CSI driver, e.g. retrieving secrets using the [Azure Key Vault Provider](https://github.com/Azure/secrets-store-csi-driver-provider-azure).
```yaml
extraSecretMounts:
- name: secrets-store-inline
mountPath: /run/secrets
readOnly: true
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "my-provider"
nodePublishSecretRef:
name: akv-creds
```
## Image Renderer Plug-In
This chart supports enabling [remote image rendering](https://github.com/grafana/grafana-image-renderer/blob/master/README.md#run-in-docker)
```yaml
imageRenderer:
enabled: true
```
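A slightly fuller sketch of the image renderer values; `replicas`, `resources` and `serviceMonitor` are keys used by the chart's image-renderer templates, and the numbers below are only illustrative:
```yaml
imageRenderer:
  enabled: true
  replicas: 1
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 200m
      memory: 512Mi
  # optional: scrape renderer metrics with the Prometheus Operator
  serviceMonitor:
    enabled: false
```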
### Image Renderer NetworkPolicy
By default, the image-renderer pods will have a network policy which only allows ingress traffic from the created grafana instance.
### High Availability for unified alerting
If you want to run Grafana in a high availability cluster you need to enable
the headless service by setting `headlessService: true` in your `values.yaml`
file.
Next, configure `grafana.ini` in your `values.yaml` so that it uses the headless
service to obtain the IPs of all Grafana pods in the cluster. Replace
``{{ Name }}`` with the name of your helm deployment.
```yaml
grafana.ini:
...
unified_alerting:
enabled: true
ha_peers: {{ Name }}-headless:9094
ha_listen_address: ${POD_IP}:9094
ha_advertise_address: ${POD_IP}:9094
rule_version_record_limit: "5"
alerting:
enabled: false
```
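For completeness, the corresponding top-level values might look like this (combine with the `grafana.ini` block above; the replica count is illustrative):
```yaml
# run several Grafana replicas; the headless service lets them discover each other for alerting HA
replicas: 3
headlessService: true
```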


@ -0,0 +1,53 @@
extraObjects:
- apiVersion: v1
kind: ConfigMap
metadata:
name: '{{ include "grafana.fullname" . }}-test'
data:
var1: "value1"
- apiVersion: v1
kind: Secret
metadata:
name: '{{ include "grafana.fullname" . }}-test'
type: Opaque
data:
var2: "dmFsdWUy"
sidecar:
alerts:
enabled: true
envValueFrom:
VAR1:
configMapKeyRef:
name: '{{ include "grafana.fullname" . }}-test'
key: var1
VAR2:
secretKeyRef:
name: '{{ include "grafana.fullname" . }}-test'
key: var2
VAR3:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
dashboards:
enabled: true
envValueFrom:
VAR1:
configMapKeyRef:
name: '{{ include "grafana.fullname" . }}-test'
key: var1
VAR2:
secretKeyRef:
name: '{{ include "grafana.fullname" . }}-test'
key: var2
datasources:
enabled: true
envValueFrom:
VAR1:
configMapKeyRef:
name: '{{ include "grafana.fullname" . }}-test'
key: var1
VAR2:
secretKeyRef:
name: '{{ include "grafana.fullname" . }}-test'
key: var2


@ -0,0 +1,176 @@
{{/*
Generate config map data
*/}}
{{- define "grafana.configData" -}}
{{ include "grafana.assertNoLeakedSecrets" . }}
{{- $files := .Files }}
{{- $root := . -}}
{{- with .Values.plugins }}
plugins: {{ join "," . }}
{{- end }}
grafana.ini: |
{{- range $elem, $elemVal := index .Values "grafana.ini" }}
{{- if not (kindIs "map" $elemVal) }}
{{- if kindIs "invalid" $elemVal }}
{{ $elem }} =
{{- else if kindIs "slice" $elemVal }}
{{ $elem }} = {{ toJson $elemVal }}
{{- else if kindIs "string" $elemVal }}
{{ $elem }} = {{ tpl $elemVal $ }}
{{- else }}
{{ $elem }} = {{ $elemVal }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $value := index .Values "grafana.ini" }}
{{- if kindIs "map" $value }}
[{{ $key }}]
{{- range $elem, $elemVal := $value }}
{{- if kindIs "invalid" $elemVal }}
{{ $elem }} =
{{- else if kindIs "slice" $elemVal }}
{{ $elem }} = {{ toJson $elemVal }}
{{- else if kindIs "string" $elemVal }}
{{ $elem }} = {{ tpl $elemVal $ }}
{{- else }}
{{ $elem }} = {{ $elemVal }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.datasources }}
{{- if not (hasKey $value "secret") }}
{{ $key }}: |
{{- tpl (toYaml $value | nindent 2) $root }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.notifiers }}
{{- if not (hasKey $value "secret") }}
{{ $key }}: |
{{- toYaml $value | nindent 2 }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.alerting }}
{{- if (hasKey $value "file") }}
{{ $key }}:
{{- toYaml ( $files.Get $value.file ) | nindent 2 }}
{{- else if (or (hasKey $value "secret") (hasKey $value "secretFile"))}}
{{/* will be stored inside secret generated by "configSecret.yaml"*/}}
{{- else }}
{{ $key }}: |
{{- tpl (toYaml $value | nindent 2) $root }}
{{- end }}
{{- end }}
{{- range $key, $value := .Values.dashboardProviders }}
{{ $key }}: |
{{- toYaml $value | nindent 2 }}
{{- end }}
{{- if .Values.dashboards }}
download_dashboards.sh: |
#!/usr/bin/env sh
set -euf
{{- if .Values.dashboardProviders }}
{{- range $key, $value := .Values.dashboardProviders }}
{{- range $value.providers }}
mkdir -p {{ .options.path }}
{{- end }}
{{- end }}
{{- end }}
{{ $dashboardProviders := .Values.dashboardProviders }}
{{- range $provider, $dashboards := .Values.dashboards }}
{{- range $key, $value := $dashboards }}
{{- if (or (hasKey $value "gnetId") (hasKey $value "url")) }}
curl {{ get $value "curlOptions" | default $.Values.defaultCurlOptions }} \
--connect-timeout 60 \
--max-time 60 \
{{- if not $value.b64content }}
{{- if not $value.acceptHeader }}
-H "Accept: application/json" \
{{- else }}
-H "Accept: {{ $value.acceptHeader }}" \
{{- end }}
{{- if $value.token }}
-H "Authorization: token {{ $value.token }}" \
{{- end }}
{{- if $value.bearerToken }}
-H "Authorization: Bearer {{ $value.bearerToken }}" \
{{- end }}
{{- if $value.basic }}
-H "Authorization: Basic {{ $value.basic }}" \
{{- end }}
{{- if $value.gitlabToken }}
-H "PRIVATE-TOKEN: {{ $value.gitlabToken }}" \
{{- end }}
-H "Content-Type: application/json;charset=UTF-8" \
{{- end }}
{{- $dpPath := "" -}}
{{- range $kd := (index $dashboardProviders "dashboardproviders.yaml").providers }}
{{- if eq $kd.name $provider }}
{{- $dpPath = $kd.options.path }}
{{- end }}
{{- end }}
{{- if $value.url }}
"{{ $value.url }}" \
{{- else }}
"https://grafana.com/api/dashboards/{{ $value.gnetId }}/revisions/{{- if $value.revision -}}{{ $value.revision }}{{- else -}}1{{- end -}}/download" \
{{- end }}
{{- if $value.datasource }}
{{- if kindIs "string" $value.datasource }}
| sed '/-- .* --/! s/"datasource":.*,/"datasource": "{{ $value.datasource }}",/g' \
{{- end }}
{{- if kindIs "slice" $value.datasource }}
{{- range $value.datasource }}
| sed '/-- .* --/! s/${{"{"}}{{ .name }}}/{{ .value }}/g' \
{{- end }}
{{- end }}
{{- end }}
{{- if $value.b64content }}
| base64 -d \
{{- end }}
> "{{- if $dpPath -}}{{ $dpPath }}{{- else -}}/var/lib/grafana/dashboards/{{ $provider }}{{- end -}}/{{ $key }}.json"
{{ end }}
{{- end }}
{{- end }}
{{- end }}
{{- end -}}
{{/*
Generate dashboard json config map data
*/}}
{{- define "grafana.configDashboardProviderData" -}}
provider.yaml: |-
apiVersion: 1
providers:
- name: '{{ .Values.sidecar.dashboards.provider.name }}'
orgId: {{ .Values.sidecar.dashboards.provider.orgid }}
{{- if not .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
folder: '{{ .Values.sidecar.dashboards.provider.folder }}'
folderUid: '{{ .Values.sidecar.dashboards.provider.folderUid }}'
{{- end }}
type: {{ .Values.sidecar.dashboards.provider.type }}
disableDeletion: {{ .Values.sidecar.dashboards.provider.disableDelete }}
allowUiUpdates: {{ .Values.sidecar.dashboards.provider.allowUiUpdates }}
updateIntervalSeconds: {{ .Values.sidecar.dashboards.provider.updateIntervalSeconds | default 30 }}
options:
foldersFromFilesStructure: {{ .Values.sidecar.dashboards.provider.foldersFromFilesStructure }}
path: {{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}
{{- end -}}
{{- define "grafana.secretsData" -}}
{{- if and (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) }}
admin-user: {{ .Values.adminUser | b64enc | quote }}
{{- if .Values.adminPassword }}
admin-password: {{ .Values.adminPassword | b64enc | quote }}
{{- else }}
admin-password: {{ include "grafana.password" . }}
{{- end }}
{{- end }}
{{- if not .Values.ldap.existingSecret }}
ldap-toml: {{ tpl .Values.ldap.config $ | b64enc | quote }}
{{- end }}
{{- end -}}

File diff suppressed because it is too large


@ -0,0 +1,53 @@
{{- if (and (not .Values.useStatefulSet) (or (not .Values.persistence.enabled) (eq .Values.persistence.type "pvc"))) }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "grafana.fullname" . }}
namespace: {{ include "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if (not .Values.autoscaling.enabled) }}
replicas: {{ .Values.replicas }}
{{- end }}
revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 6 }}
{{- with .Values.deploymentStrategy }}
strategy:
{{- toYaml . | trim | nindent 4 }}
{{- end }}
template:
metadata:
labels:
{{- include "grafana.labels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
annotations:
checksum/config: {{ include "grafana.configData" . | sha256sum }}
{{- if .Values.dashboards }}
checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }}
{{- end }}
checksum/sc-dashboard-provider-config: {{ include "grafana.configDashboardProviderData" . | sha256sum }}
{{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
checksum/secret: {{ include "grafana.secretsData" . | sha256sum }}
{{- end }}
{{- if .Values.envRenderSecret }}
checksum/secret-env: {{ tpl (toYaml .Values.envRenderSecret) . | sha256sum }}
{{- end }}
kubectl.kubernetes.io/default-container: {{ .Chart.Name }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- include "grafana.pod" . | nindent 6 }}
{{- end }}


@ -0,0 +1,8 @@
{{ range .Values.extraObjects }}
---
{{- if typeIs "string" . }}
{{ tpl . $ }}
{{ else }}
{{ tpl (. | toYaml) $ }}
{{- end }}
{{ end }}


@ -0,0 +1,51 @@
{{- $sts := list "sts" "StatefulSet" "statefulset" -}}
{{- if .Values.autoscaling.enabled }}
apiVersion: {{ include "grafana.hpa.apiVersion" . }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "grafana.fullname" . }}
namespace: {{ include "grafana.namespace" . }}
labels:
app.kubernetes.io/name: {{ include "grafana.name" . }}
helm.sh/chart: {{ include "grafana.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
scaleTargetRef:
apiVersion: apps/v1
{{- if (or (.Values.useStatefulSet) (and .Values.persistence.enabled (not .Values.persistence.existingClaim) (has .Values.persistence.type $sts)))}}
kind: StatefulSet
{{- else }}
kind: Deployment
{{- end }}
name: {{ include "grafana.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetMemory }}
- type: Resource
resource:
name: memory
{{- if eq (include "grafana.hpa.apiVersion" .) "autoscaling/v2beta1" }}
targetAverageUtilization: {{ .Values.autoscaling.targetMemory }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetMemory }}
{{- end }}
{{- end }}
{{- if .Values.autoscaling.targetCPU }}
- type: Resource
resource:
name: cpu
{{- if eq (include "grafana.hpa.apiVersion" .) "autoscaling/v2beta1" }}
targetAverageUtilization: {{ .Values.autoscaling.targetCPU }}
{{- else }}
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPU }}
{{- end }}
{{- end }}
{{- if .Values.autoscaling.behavior }}
behavior: {{ toYaml .Values.autoscaling.behavior | nindent 4 }}
{{- end }}
{{- end }}


@ -0,0 +1,198 @@
{{ if .Values.imageRenderer.enabled }}
{{- $root := . -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "grafana.fullname" . }}-image-renderer
namespace: {{ include "grafana.namespace" . }}
labels:
{{- include "grafana.imageRenderer.labels" . | nindent 4 }}
{{- with .Values.imageRenderer.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.imageRenderer.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and (not .Values.imageRenderer.autoscaling.enabled) (.Values.imageRenderer.replicas) }}
replicas: {{ .Values.imageRenderer.replicas }}
{{- end }}
revisionHistoryLimit: {{ .Values.imageRenderer.revisionHistoryLimit }}
selector:
matchLabels:
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }}
{{- with .Values.imageRenderer.deploymentStrategy }}
strategy:
{{- toYaml . | trim | nindent 4 }}
{{- end }}
template:
metadata:
labels:
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 8 }}
{{- with .Values.imageRenderer.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- with .Values.imageRenderer.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imageRenderer.schedulerName }}
schedulerName: "{{ . }}"
{{- end }}
{{- with .Values.imageRenderer.serviceAccountName }}
serviceAccountName: "{{ . }}"
{{- end }}
automountServiceAccountToken: {{ .Values.imageRenderer.automountServiceAccountToken }}
{{- with .Values.imageRenderer.securityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.imageRenderer.hostAliases }}
hostAliases:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.imageRenderer.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}
{{- if or .Values.imageRenderer.image.pullSecrets .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- include "grafana.imagePullSecrets" (dict "root" $root "imagePullSecrets" .Values.imageRenderer.image.pullSecrets) | nindent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}-image-renderer
{{- $registry := .Values.global.imageRegistry | default .Values.imageRenderer.image.registry -}}
{{- if .Values.imageRenderer.image.sha }}
image: "{{ $registry }}/{{ .Values.imageRenderer.image.repository }}:{{ .Values.imageRenderer.image.tag }}@sha256:{{ .Values.imageRenderer.image.sha }}"
{{- else }}
image: "{{ $registry }}/{{ .Values.imageRenderer.image.repository }}:{{ .Values.imageRenderer.image.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.imageRenderer.image.pullPolicy }}
{{- if .Values.imageRenderer.command }}
command:
{{- range .Values.imageRenderer.command }}
- {{ . }}
{{- end }}
{{- end}}
ports:
- name: {{ .Values.imageRenderer.service.portName }}
containerPort: {{ .Values.imageRenderer.service.targetPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /
port: {{ .Values.imageRenderer.service.portName }}
env:
- name: HTTP_PORT
value: {{ .Values.imageRenderer.service.targetPort | quote }}
{{- if .Values.imageRenderer.serviceMonitor.enabled }}
- name: ENABLE_METRICS
value: "true"
{{- end }}
{{- range $key, $value := .Values.imageRenderer.envValueFrom }}
- name: {{ $key | quote }}
valueFrom:
{{- tpl (toYaml $value) $ | nindent 16 }}
{{- end }}
{{- range $key, $value := .Values.imageRenderer.env }}
- name: {{ $key | quote }}
value: {{ $value | quote }}
{{- end }}
{{- with .Values.imageRenderer.containerSecurityContext }}
securityContext:
{{- toYaml . | nindent 12 }}
{{- end }}
volumeMounts:
- mountPath: /tmp
name: image-renderer-tmpfs
{{- range .Values.imageRenderer.extraConfigmapMounts }}
- name: {{ tpl .name $root }}
mountPath: {{ tpl .mountPath $root }}
subPath: {{ tpl (.subPath | default "") $root }}
readOnly: {{ .readOnly }}
{{- end }}
{{- range .Values.imageRenderer.extraSecretMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
readOnly: {{ .readOnly }}
subPath: {{ .subPath | default "" }}
{{- end }}
{{- range .Values.imageRenderer.extraVolumeMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
subPath: {{ .subPath | default "" }}
readOnly: {{ .readOnly }}
{{- end }}
{{- with .Values.imageRenderer.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.imageRenderer.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.imageRenderer.affinity }}
affinity:
{{- tpl (toYaml .) $root | nindent 8 }}
{{- end }}
{{- with .Values.imageRenderer.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: image-renderer-tmpfs
emptyDir: {}
{{- range .Values.imageRenderer.extraConfigmapMounts }}
- name: {{ tpl .name $root }}
configMap:
name: {{ tpl .configMap $root }}
{{- with .items }}
items:
{{- toYaml . | nindent 14 }}
{{- end }}
{{- end }}
{{- range .Values.imageRenderer.extraSecretMounts }}
{{- if .secretName }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
defaultMode: {{ .defaultMode }}
{{- with .items }}
items:
{{- toYaml . | nindent 14 }}
{{- end }}
{{- else if .projected }}
- name: {{ .name }}
projected:
{{- toYaml .projected | nindent 12 }}
{{- else if .csi }}
- name: {{ .name }}
csi:
{{- toYaml .csi | nindent 12 }}
{{- end }}
{{- end }}
{{- range .Values.imageRenderer.extraVolumes }}
- name: {{ .name }}
{{- if .existingClaim }}
persistentVolumeClaim:
claimName: {{ .existingClaim }}
{{- else if .hostPath }}
hostPath:
{{ toYaml .hostPath | nindent 12 }}
{{- else if .csi }}
csi:
{{- toYaml .csi | nindent 12 }}
{{- else if .configMap }}
configMap:
{{- toYaml .configMap | nindent 12 }}
{{- else if .emptyDir }}
emptyDir:
{{- toYaml .emptyDir | nindent 12 }}
{{- else }}
emptyDir: {}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,48 @@
{{- if and .Values.imageRenderer.enabled .Values.imageRenderer.serviceMonitor.enabled }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "grafana.fullname" . }}-image-renderer
{{- if .Values.imageRenderer.serviceMonitor.namespace }}
namespace: {{ tpl .Values.imageRenderer.serviceMonitor.namespace . }}
{{- else }}
namespace: {{ include "grafana.namespace" . }}
{{- end }}
labels:
{{- include "grafana.imageRenderer.labels" . | nindent 4 }}
{{- with .Values.imageRenderer.serviceMonitor.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
endpoints:
- port: {{ .Values.imageRenderer.service.portName }}
{{- with .Values.imageRenderer.serviceMonitor.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.imageRenderer.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
honorLabels: true
path: {{ .Values.imageRenderer.serviceMonitor.path }}
scheme: {{ .Values.imageRenderer.serviceMonitor.scheme }}
{{- with .Values.imageRenderer.serviceMonitor.tlsConfig }}
tlsConfig:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.imageRenderer.serviceMonitor.relabelings }}
relabelings:
{{- toYaml . | nindent 6 }}
{{- end }}
jobLabel: "{{ .Release.Name }}-image-renderer"
selector:
matchLabels:
{{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }}
namespaceSelector:
matchNames:
- {{ include "grafana.namespace" . }}
{{- with .Values.imageRenderer.serviceMonitor.targetLabels }}
targetLabels:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@ -0,0 +1,25 @@
{{- if .Values.podDisruptionBudget }}
apiVersion: {{ include "grafana.podDisruptionBudget.apiVersion" . }}
kind: PodDisruptionBudget
metadata:
name: {{ include "grafana.fullname" . }}
namespace: {{ include "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- with .Values.podDisruptionBudget.minAvailable }}
minAvailable: {{ . }}
{{- end }}
{{- with .Values.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ . }}
{{- end }}
selector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 6 }}
{{- with .Values.podDisruptionBudget.unhealthyPodEvictionPolicy }}
unhealthyPodEvictionPolicy: {{ . }}
{{- end }}
{{- end }}


@ -0,0 +1,70 @@
{{- if .Values.service.enabled }}
{{- $root := . }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "grafana.fullname" . }}
namespace: {{ include "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.service.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.service.annotations }}
annotations:
{{- tpl (toYaml . | nindent 4) $root }}
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
type: ClusterIP
{{- with .Values.service.clusterIP }}
clusterIP: {{ . }}
{{- end }}
{{- else if eq .Values.service.type "LoadBalancer" }}
type: LoadBalancer
{{- with .Values.service.loadBalancerIP }}
loadBalancerIP: {{ . }}
{{- end }}
{{- with .Values.service.loadBalancerClass }}
loadBalancerClass: {{ . }}
{{- end }}
{{- with .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- else }}
type: {{ .Values.service.type }}
{{- end }}
{{- if .Values.service.ipFamilyPolicy }}
ipFamilyPolicy: {{ .Values.service.ipFamilyPolicy }}
{{- end }}
{{- if .Values.service.ipFamilies }}
ipFamilies: {{ .Values.service.ipFamilies | toYaml | nindent 2 }}
{{- end }}
{{- with .Values.service.externalIPs }}
externalIPs:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ . }}
{{- end }}
{{- with .Values.service.sessionAffinity }}
sessionAffinity: {{ . }}
{{- end }}
ports:
- name: {{ .Values.service.portName }}
port: {{ .Values.service.port }}
protocol: TCP
targetPort: {{ .Values.service.targetPort }}
{{- with .Values.service.appProtocol }}
appProtocol: {{ . }}
{{- end }}
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
{{- with .Values.extraExposePorts }}
{{- tpl (toYaml . | nindent 4) $root }}
{{- end }}
selector:
{{- include "grafana.selectorLabels" . | nindent 4 }}
{{- end }}


@ -0,0 +1,56 @@
{{- if .Values.serviceMonitor.enabled }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "grafana.fullname" . }}
{{- if .Values.serviceMonitor.namespace }}
namespace: {{ tpl .Values.serviceMonitor.namespace . }}
{{- else }}
namespace: {{ include "grafana.namespace" . }}
{{- end }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.serviceMonitor.labels }}
{{- tpl (toYaml . | nindent 4) $ }}
{{- end }}
spec:
endpoints:
- port: {{ .Values.service.portName }}
{{- with .Values.serviceMonitor.interval }}
interval: {{ . }}
{{- end }}
{{- with .Values.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
honorLabels: true
path: {{ .Values.serviceMonitor.path }}
scheme: {{ .Values.serviceMonitor.scheme }}
{{- with .Values.serviceMonitor.tlsConfig }}
tlsConfig:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.serviceMonitor.relabelings }}
relabelings:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.serviceMonitor.metricRelabelings }}
metricRelabelings:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- with .Values.serviceMonitor.basicAuth }}
basicAuth:
{{- toYaml . | nindent 6 }}
{{- end }}
jobLabel: "{{ .Release.Name }}"
selector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 6 }}
namespaceSelector:
matchNames:
- {{ include "grafana.namespace" . }}
{{- with .Values.serviceMonitor.targetLabels }}
targetLabels:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@ -0,0 +1,61 @@
{{- $sts := list "sts" "StatefulSet" "statefulset" -}}
{{- if (or (.Values.useStatefulSet) (and .Values.persistence.enabled (not .Values.persistence.existingClaim) (has .Values.persistence.type $sts)))}}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "grafana.fullname" . }}
namespace: {{ include "grafana.namespace" . }}
labels:
{{- include "grafana.labels" . | nindent 4 }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
{{- include "grafana.selectorLabels" . | nindent 6 }}
serviceName: {{ include "grafana.fullname" . }}-headless
template:
metadata:
labels:
{{- include "grafana.labels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }}
checksum/sc-dashboard-provider-config: {{ include (print $.Template.BasePath "/configmap-dashboard-provider.yaml") . | sha256sum }}
{{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }}
checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
{{- end }}
kubectl.kubernetes.io/default-container: {{ .Chart.Name }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- include "grafana.pod" . | nindent 6 }}
{{- if .Values.persistence.enabled}}
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: storage
spec:
accessModes: {{ .Values.persistence.accessModes }}
storageClassName: {{ .Values.persistence.storageClassName }}
{{- with .Values.persistence.volumeName }}
volumeName: {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- with .Values.persistence.selectorLabels }}
selector:
matchLabels:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,57 @@
{{- if .Values.testFramework.enabled }}
{{- $root := . }}
apiVersion: v1
kind: Pod
metadata:
name: {{ include "grafana.fullname" . }}-test
labels:
{{- include "grafana.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": {{ .Values.testFramework.hookType | default "test" }}
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
namespace: {{ include "grafana.namespace" . }}
spec:
serviceAccountName: {{ include "grafana.serviceAccountNameTest" . }}
{{- with .Values.testFramework.securityContext }}
securityContext:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- if or .Values.image.pullSecrets .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- include "grafana.imagePullSecrets" (dict "root" $root "imagePullSecrets" .Values.image.pullSecrets) | nindent 4 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- tpl (toYaml .) $root | nindent 4 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 4 }}
{{- end }}
containers:
- name: {{ .Release.Name }}-test
image: "{{ .Values.global.imageRegistry | default .Values.testFramework.image.registry }}/{{ .Values.testFramework.image.repository }}:{{ .Values.testFramework.image.tag }}"
imagePullPolicy: "{{ .Values.testFramework.imagePullPolicy}}"
command: ["/opt/bats/bin/bats", "-t", "/tests/run.sh"]
{{- with .Values.testFramework.containerSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
volumeMounts:
- mountPath: /tests
name: tests
readOnly: true
{{- with .Values.testFramework.resources }}
resources:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: tests
configMap:
name: {{ include "grafana.fullname" . }}-test
restartPolicy: Never
{{- end }}

File diff suppressed because it is too large


@ -0,0 +1,29 @@
annotations:
artifacthub.io/license: Apache-2.0
artifacthub.io/links: |
- name: Chart Source
url: https://github.com/prometheus-community/helm-charts
apiVersion: v2
appVersion: 2.15.0
description: Install kube-state-metrics to generate and expose cluster-level metrics
home: https://github.com/kubernetes/kube-state-metrics/
keywords:
- metric
- monitoring
- prometheus
- kubernetes
maintainers:
- email: tariq.ibrahim@mulesoft.com
name: tariq1890
url: https://github.com/tariq1890
- email: manuel@rueg.eu
name: mrueg
url: https://github.com/mrueg
- email: david@0xdc.me
name: dotdc
url: https://github.com/dotdc
name: kube-state-metrics
sources:
- https://github.com/kubernetes/kube-state-metrics/
type: application
version: 5.33.2


@ -0,0 +1,175 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kube-state-metrics.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kube-state-metrics.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "kube-state-metrics.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "kube-state-metrics.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "kube-state-metrics.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "kube-state-metrics.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Generate basic labels
*/}}
{{- define "kube-state-metrics.labels" }}
helm.sh/chart: {{ template "kube-state-metrics.chart" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/component: metrics
app.kubernetes.io/part-of: {{ template "kube-state-metrics.name" . }}
{{- include "kube-state-metrics.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
{{- if .Values.customLabels }}
{{ tpl (toYaml .Values.customLabels) . }}
{{- end }}
{{- if .Values.releaseLabel }}
release: {{ .Release.Name }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "kube-state-metrics.selectorLabels" }}
{{- if .Values.selectorOverride }}
{{ toYaml .Values.selectorOverride }}
{{- else }}
app.kubernetes.io/name: {{ include "kube-state-metrics.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{- end }}
{{/* Sets default scrape limits for servicemonitor */}}
{{- define "servicemonitor.scrapeLimits" -}}
{{- with .sampleLimit }}
sampleLimit: {{ . }}
{{- end }}
{{- with .targetLimit }}
targetLimit: {{ . }}
{{- end }}
{{- with .labelLimit }}
labelLimit: {{ . }}
{{- end }}
{{- with .labelNameLengthLimit }}
labelNameLengthLimit: {{ . }}
{{- end }}
{{- with .labelValueLengthLimit }}
labelValueLengthLimit: {{ . }}
{{- end }}
{{- end -}}
{{/* Sets default scrape limits for scrapeconfig */}}
{{- define "scrapeconfig.scrapeLimits" -}}
{{- with .sampleLimit }}
sampleLimit: {{ . }}
{{- end }}
{{- with .targetLimit }}
targetLimit: {{ . }}
{{- end }}
{{- with .labelLimit }}
labelLimit: {{ . }}
{{- end }}
{{- with .labelNameLengthLimit }}
labelNameLengthLimit: {{ . }}
{{- end }}
{{- with .labelValueLengthLimit }}
labelValueLengthLimit: {{ . }}
{{- end }}
{{- end -}}
{{/*
Formats imagePullSecrets. Input is (dict "Values" .Values "imagePullSecrets" .{specific imagePullSecrets})
*/}}
{{- define "kube-state-metrics.imagePullSecrets" -}}
{{- range (concat .Values.global.imagePullSecrets .imagePullSecrets) }}
{{- if eq (typeOf .) "map[string]interface {}" }}
- {{ toYaml . | trim }}
{{- else }}
- name: {{ . }}
{{- end }}
{{- end }}
{{- end -}}
{{/*
The image to use for kube-state-metrics
*/}}
{{- define "kube-state-metrics.image" -}}
{{- if .Values.image.sha }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s@%s" .Values.global.imageRegistry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) .Values.image.sha }}
{{- else }}
{{- printf "%s/%s:%s@%s" .Values.image.registry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) .Values.image.sha }}
{{- end }}
{{- else }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) }}
{{- else }}
{{- printf "%s/%s:%s" .Values.image.registry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) }}
{{- end }}
{{- end }}
{{- end }}
{{/*
The image to use for kubeRBACProxy
*/}}
{{- define "kubeRBACProxy.image" -}}
{{- if .Values.kubeRBACProxy.image.sha }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s@%s" .Values.global.imageRegistry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) .Values.kubeRBACProxy.image.sha }}
{{- else }}
{{- printf "%s/%s:%s@%s" .Values.kubeRBACProxy.image.registry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) .Values.kubeRBACProxy.image.sha }}
{{- end }}
{{- else }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) }}
{{- else }}
{{- printf "%s/%s:%s" .Values.kubeRBACProxy.image.registry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) }}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,372 @@
apiVersion: apps/v1
{{- if .Values.autosharding.enabled }}
kind: StatefulSet
{{- else }}
kind: Deployment
{{- end }}
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
labels:
{{- include "kube-state-metrics.labels" . | indent 4 }}
{{- if .Values.annotations }}
annotations:
{{ toYaml .Values.annotations | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
{{- include "kube-state-metrics.selectorLabels" . | indent 6 }}
replicas: {{ .Values.replicas }}
{{- if not .Values.autosharding.enabled }}
strategy:
type: {{ .Values.updateStrategy | default "RollingUpdate" }}
{{- end }}
revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
{{- if .Values.autosharding.enabled }}
serviceName: {{ template "kube-state-metrics.fullname" . }}
volumeClaimTemplates: []
{{- end }}
template:
metadata:
labels:
{{- include "kube-state-metrics.labels" . | indent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
{{ toYaml .Values.podAnnotations | nindent 8 }}
{{- end }}
spec:
automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
hostNetwork: {{ .Values.hostNetwork }}
serviceAccountName: {{ template "kube-state-metrics.serviceAccountName" . }}
{{- if .Values.securityContext.enabled }}
securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- with .Values.initContainers }}
initContainers:
{{- toYaml . | nindent 6 }}
{{- end }}
{{- if .Values.dnsConfig }}
dnsConfig: {{ toYaml .Values.dnsConfig | nindent 8 }}
{{- end }}
dnsPolicy: {{ .Values.dnsPolicy }}
containers:
{{- $servicePort := ternary 9090 (.Values.service.port | default 8080) .Values.kubeRBACProxy.enabled}}
{{- $telemetryPort := ternary 9091 (.Values.selfMonitor.telemetryPort | default 8081) .Values.kubeRBACProxy.enabled}}
- name: {{ template "kube-state-metrics.name" . }}
{{- if .Values.autosharding.enabled }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if .Values.env }}
{{- toYaml .Values.env | nindent 8 }}
{{- end }}
{{ else }}
{{- if .Values.env }}
env:
{{- toYaml .Values.env | nindent 8 }}
{{- end }}
{{- end }}
args:
{{- if .Values.extraArgs }}
{{- .Values.extraArgs | toYaml | nindent 8 }}
{{- end }}
{{- if .Values.kubeRBACProxy.enabled }}
- --host=127.0.0.1
{{- end }}
- --port={{ $servicePort }}
{{- if .Values.collectors }}
- --resources={{ .Values.collectors | join "," }}
{{- end }}
{{- if .Values.metricLabelsAllowlist }}
- --metric-labels-allowlist={{ .Values.metricLabelsAllowlist | join "," }}
{{- end }}
{{- if .Values.metricAnnotationsAllowList }}
- --metric-annotations-allowlist={{ .Values.metricAnnotationsAllowList | join "," }}
{{- end }}
{{- if .Values.metricAllowlist }}
- --metric-allowlist={{ .Values.metricAllowlist | join "," }}
{{- end }}
{{- if .Values.metricDenylist }}
- --metric-denylist={{ .Values.metricDenylist | join "," }}
{{- end }}
{{- $namespaces := list }}
{{- if .Values.namespaces }}
{{- range $ns := join "," .Values.namespaces | split "," }}
{{- $namespaces = append $namespaces (tpl $ns $) }}
{{- end }}
{{- end }}
{{- if .Values.releaseNamespace }}
{{- $namespaces = append $namespaces ( include "kube-state-metrics.namespace" . ) }}
{{- end }}
{{- if $namespaces }}
- --namespaces={{ $namespaces | mustUniq | join "," }}
{{- end }}
{{- if .Values.namespacesDenylist }}
- --namespaces-denylist={{ tpl (.Values.namespacesDenylist | join ",") $ }}
{{- end }}
{{- if .Values.autosharding.enabled }}
- --pod=$(POD_NAME)
- --pod-namespace=$(POD_NAMESPACE)
{{- end }}
{{- if .Values.kubeconfig.enabled }}
- --kubeconfig=/opt/k8s/.kube/config
{{- end }}
{{- if .Values.kubeRBACProxy.enabled }}
- --telemetry-host=127.0.0.1
- --telemetry-port={{ $telemetryPort }}
{{- else }}
{{- if .Values.selfMonitor.telemetryHost }}
- --telemetry-host={{ .Values.selfMonitor.telemetryHost }}
{{- end }}
{{- if .Values.selfMonitor.telemetryPort }}
- --telemetry-port={{ $telemetryPort }}
{{- end }}
{{- end }}
{{- if .Values.customResourceState.enabled }}
- --custom-resource-state-config-file=/etc/customresourcestate/config.yaml
{{- end }}
{{- if or (.Values.kubeconfig.enabled) (.Values.customResourceState.enabled) (.Values.volumeMounts) }}
volumeMounts:
{{- if .Values.kubeconfig.enabled }}
- name: kubeconfig
mountPath: /opt/k8s/.kube/
readOnly: true
{{- end }}
{{- if .Values.customResourceState.enabled }}
- name: customresourcestate-config
mountPath: /etc/customresourcestate
readOnly: true
{{- end }}
{{- if .Values.volumeMounts }}
{{ toYaml .Values.volumeMounts | indent 8 }}
{{- end }}
{{- end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
image: {{ include "kube-state-metrics.image" . }}
{{- if eq .Values.kubeRBACProxy.enabled false }}
ports:
- containerPort: {{ .Values.service.port | default 8080}}
name: "http"
{{- if .Values.selfMonitor.enabled }}
- containerPort: {{ $telemetryPort }}
name: "metrics"
{{- end }}
{{- end }}
{{- if not .Values.kubeRBACProxy.enabled }}
{{- if .Values.startupProbe.enabled }}
startupProbe:
failureThreshold: {{ .Values.startupProbe.failureThreshold }}
httpGet:
{{- if .Values.hostNetwork }}
host: 127.0.0.1
{{- end }}
httpHeaders:
{{- range $_, $header := .Values.startupProbe.httpGet.httpHeaders }}
- name: {{ $header.name }}
value: {{ $header.value }}
{{- end }}
path: /healthz
{{- if .Values.kubeRBACProxy.enabled }}
port: {{ .Values.service.port | default 8080 }}
scheme: HTTPS
{{- else }}
port: {{ $servicePort }}
scheme: {{ upper .Values.startupProbe.httpGet.scheme }}
{{- end }}
initialDelaySeconds: {{ .Values.startupProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.startupProbe.periodSeconds }}
successThreshold: {{ .Values.startupProbe.successThreshold }}
timeoutSeconds: {{ .Values.startupProbe.timeoutSeconds }}
{{- end }}
livenessProbe:
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
httpGet:
{{- if .Values.hostNetwork }}
host: 127.0.0.1
{{- end }}
httpHeaders:
{{- range $_, $header := .Values.livenessProbe.httpGet.httpHeaders }}
- name: {{ $header.name }}
value: {{ $header.value }}
{{- end }}
path: /livez
{{- if .Values.kubeRBACProxy.enabled }}
port: {{ .Values.service.port | default 8080 }}
scheme: HTTPS
{{- else }}
port: {{ $servicePort }}
scheme: {{ upper .Values.livenessProbe.httpGet.scheme }}
{{- end }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
readinessProbe:
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
httpGet:
{{- if .Values.hostNetwork }}
host: 127.0.0.1
{{- end }}
httpHeaders:
{{- range $_, $header := .Values.readinessProbe.httpGet.httpHeaders }}
- name: {{ $header.name }}
value: {{ $header.value }}
{{- end }}
path: /readyz
{{- if .Values.kubeRBACProxy.enabled }}
port: {{ .Values.selfMonitor.telemetryPort | default 8081 }}
scheme: HTTPS
{{- else }}
port: {{ $telemetryPort }}
scheme: {{ upper .Values.readinessProbe.httpGet.scheme }}
{{- end }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- if .Values.containerSecurityContext }}
securityContext:
{{ toYaml .Values.containerSecurityContext | indent 10 }}
{{- end }}
{{- if .Values.kubeRBACProxy.enabled }}
- name: kube-rbac-proxy-http
args:
{{- if .Values.kubeRBACProxy.extraArgs }}
{{- .Values.kubeRBACProxy.extraArgs | toYaml | nindent 8 }}
{{- end }}
- --secure-listen-address=:{{ .Values.service.port | default 8080}}
- --upstream=http://127.0.0.1:{{ $servicePort }}/
- --proxy-endpoints-port=8888
- --config-file=/etc/kube-rbac-proxy-config/config-file.yaml
volumeMounts:
- name: kube-rbac-proxy-config
mountPath: /etc/kube-rbac-proxy-config
{{- with .Values.kubeRBACProxy.volumeMounts }}
{{- toYaml . | nindent 10 }}
{{- end }}
imagePullPolicy: {{ .Values.kubeRBACProxy.image.pullPolicy }}
image: {{ include "kubeRBACProxy.image" . }}
ports:
- containerPort: {{ .Values.service.port | default 8080}}
name: "http"
- containerPort: 8888
name: "http-healthz"
readinessProbe:
httpGet:
scheme: HTTPS
port: 8888
path: healthz
initialDelaySeconds: 5
timeoutSeconds: 5
{{- if .Values.kubeRBACProxy.resources }}
resources:
{{ toYaml .Values.kubeRBACProxy.resources | indent 10 }}
{{- end }}
{{- if .Values.kubeRBACProxy.containerSecurityContext }}
securityContext:
{{ toYaml .Values.kubeRBACProxy.containerSecurityContext | indent 10 }}
{{- end }}
{{- if .Values.selfMonitor.enabled }}
- name: kube-rbac-proxy-telemetry
args:
{{- if .Values.kubeRBACProxy.extraArgs }}
{{- .Values.kubeRBACProxy.extraArgs | toYaml | nindent 8 }}
{{- end }}
- --secure-listen-address=:{{ .Values.selfMonitor.telemetryPort | default 8081 }}
- --upstream=http://127.0.0.1:{{ $telemetryPort }}/
- --proxy-endpoints-port=8889
- --config-file=/etc/kube-rbac-proxy-config/config-file.yaml
volumeMounts:
- name: kube-rbac-proxy-config
mountPath: /etc/kube-rbac-proxy-config
{{- with .Values.kubeRBACProxy.volumeMounts }}
{{- toYaml . | nindent 10 }}
{{- end }}
imagePullPolicy: {{ .Values.kubeRBACProxy.image.pullPolicy }}
image: {{ include "kubeRBACProxy.image" . }}
ports:
- containerPort: {{ .Values.selfMonitor.telemetryPort | default 8081 }}
name: "metrics"
- containerPort: 8889
name: "metrics-healthz"
readinessProbe:
httpGet:
scheme: HTTPS
port: 8889
path: healthz
initialDelaySeconds: 5
timeoutSeconds: 5
{{- if .Values.kubeRBACProxy.resources }}
resources:
{{ toYaml .Values.kubeRBACProxy.resources | indent 10 }}
{{- end }}
{{- if .Values.kubeRBACProxy.containerSecurityContext }}
securityContext:
{{ toYaml .Values.kubeRBACProxy.containerSecurityContext | indent 10 }}
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.containers }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- include "kube-state-metrics.imagePullSecrets" (dict "Values" .Values "imagePullSecrets" .Values.imagePullSecrets) | indent 8 }}
{{- end }}
{{- if .Values.affinity }}
affinity:
{{- if kindIs "map" .Values.affinity }}
{{- toYaml .Values.affinity | nindent 8 }}
{{- else }}
{{- tpl .Values.affinity $ | nindent 8 }}
{{- end }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ tpl (toYaml .) $ | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ tpl (toYaml .) $ | indent 8 }}
{{- end }}
{{- if .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{ toYaml .Values.topologySpreadConstraints | indent 8 }}
{{- end }}
{{- if or (.Values.kubeconfig.enabled) (.Values.customResourceState.enabled) (.Values.volumes) (.Values.kubeRBACProxy.enabled) }}
volumes:
{{- if .Values.kubeconfig.enabled}}
- name: kubeconfig
secret:
secretName: {{ template "kube-state-metrics.fullname" . }}-kubeconfig
{{- end }}
{{- if .Values.kubeRBACProxy.enabled}}
- name: kube-rbac-proxy-config
configMap:
name: {{ template "kube-state-metrics.fullname" . }}-rbac-config
{{- end }}
{{- if .Values.customResourceState.enabled}}
- name: customresourcestate-config
configMap:
name: {{ template "kube-state-metrics.fullname" . }}-customresourcestate-config
{{- end }}
{{- if .Values.volumes }}
{{ toYaml .Values.volumes | indent 8 }}
{{- end }}
{{- end }}

Some files were not shown because too many files have changed in this diff