CHART NAME: {{ .Chart.Name }}
CHART VERSION: {{ .Chart.Version }}
APP VERSION: {{ .Chart.AppVersion }}
{{- $releaseNamespace := include "common.names.namespace" . }}
{{- $clusterDomain := .Values.clusterDomain }}
{{- $postgresqlSecretName := include "postgresql-ha.postgresqlSecretName" . }}
{{- $postgresqlUsername := include "postgresql-ha.postgresqlUsername" . }}
{{- $postgresqlDatabase := include "postgresql-ha.postgresqlDatabase" . }}
{{- $postgresqlCredentials := printf "-U %s%s" $postgresqlUsername (ternary "" (printf " -d %s" $postgresqlDatabase) (empty $postgresqlDatabase)) }}
{{- $pgpoolSvcName := include "postgresql-ha.pgpool" . }}
Did you know there are enterprise versions of the Bitnami catalog? For enhanced secure software supply chain features, unlimited pulls from Docker, LTS support, or application customization, see Bitnami Premium or Tanzu Application Catalog. See https://www.arrow.com/globalecs/na/vendors/bitnami for more information.
** Please be patient while the chart is being deployed **
{{- if .Values.diagnosticMode.enabled }}
The chart has been deployed in diagnostic mode. All probes have been disabled and the command has been overwritten with:
command: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.command "context" .) | nindent 4 }}
args: {{- include "common.tplvalues.render" (dict "value" .Values.diagnosticMode.args "context" .) | nindent 4 }}
Get the list of pods by executing:
kubectl get pods --namespace {{ $releaseNamespace }} -l app.kubernetes.io/instance={{ .Release.Name }}
Access the pod you want to debug by executing:
kubectl exec --namespace {{ $releaseNamespace }} -ti <NAME OF THE POD> -- bash
In order to replicate the container startup scripts, check the /opt/bitnami/scripts folder.
Default PostgreSQL startup command
/opt/bitnami/scripts/postgresql-repmgr/entrypoint.sh /opt/bitnami/scripts/postgresql-repmgr/run.sh
Default Pgpool-II startup command
/opt/bitnami/scripts/pgpool/entrypoint.sh /opt/bitnami/scripts/pgpool/run.sh
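Once you have finished debugging, you can return to the normal startup behavior by upgrading the release with diagnostic mode disabled. A minimal sketch (replace <CHART REFERENCE> with the chart reference you originally installed from):
helm upgrade --namespace {{ $releaseNamespace }} {{ .Release.Name }} <CHART REFERENCE> --reuse-values --set diagnosticMode.enabled=false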
{{- else }}
PostgreSQL can be accessed through Pgpool-II via port {{ .Values.service.ports.postgresql }} on the following DNS name from within your cluster:
{{ $pgpoolSvcName }}.{{ $releaseNamespace }}.svc.{{ $clusterDomain }}
Pgpool-II acts as a load balancer for PostgreSQL and forwards read/write connections to the primary node, while read-only connections are forwarded to standby nodes.
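For example, from any psql session established through Pgpool-II (see the connection commands below), you can list the backend nodes and their primary/standby roles using Pgpool-II's pseudo-SQL command:
SHOW pool_nodes;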
To get the password for the {{ $postgresqlUsername | quote }} user, run:
{{ include "common.utils.secret.getvalue" (dict "secret" $postgresqlSecretName "field" "password" "context" $) }}
To connect to your database, run the following command:
kubectl run {{ include "common.names.fullname" . }}-client --rm --tty -i --restart='Never' --namespace {{ $releaseNamespace }} \
--image {{ include "postgresql-ha.postgresql.image" . }} --env="PGPASSWORD=$PASSWORD" {{ if and (.Values.pgpool.networkPolicy.enabled) (not .Values.pgpool.networkPolicy.allowExternal) }}--labels="{{ include "common.names.fullname" . }}-client=true" {{- end }} \
--command -- psql -h {{ $pgpoolSvcName }} -p {{ .Values.service.ports.postgresql }} {{ $postgresqlCredentials }}
{{- if and .Values.pgpool.networkPolicy.enabled (not .Values.pgpool.networkPolicy.allowExternal) }}
Note: Since NetworkPolicy is enabled, only pods with the label "{{ include "common.names.fullname" . }}-client=true" will be able to connect to Pgpool-II.
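For example, to allow an existing pod in the same namespace to connect, you can add that label to it (a sketch; replace <NAME OF THE POD> with the pod that needs access):
kubectl label pod --namespace {{ $releaseNamespace }} <NAME OF THE POD> "{{ include "common.names.fullname" . }}-client=true"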
{{- end }}
To connect to your database from outside the cluster, execute the following commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_IP=$(kubectl get nodes --namespace {{ $releaseNamespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
export NODE_PORT=$(kubectl get --namespace {{ $releaseNamespace }} -o jsonpath="{.spec.ports[0].nodePort}" svc {{ $pgpoolSvcName }})
PGPASSWORD="$PASSWORD" psql -h $NODE_IP -p $NODE_PORT {{ $postgresqlCredentials }}
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace {{ $releaseNamespace }} -w {{ $pgpoolSvcName }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ $releaseNamespace }} {{ $pgpoolSvcName }} --template "{{ "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}" }}")
PGPASSWORD="$PASSWORD" psql -h $SERVICE_IP -p {{ .Values.service.ports.postgresql }} {{ $postgresqlCredentials }}
{{- else if contains "ClusterIP" .Values.service.type }}
kubectl port-forward --namespace {{ $releaseNamespace }} svc/{{ $pgpoolSvcName }} {{ .Values.service.ports.postgresql }}:{{ .Values.service.ports.postgresql }} &
psql -h 127.0.0.1 -p {{ .Values.service.ports.postgresql }} {{ $postgresqlCredentials }}
{{- end }}
{{- end }}
{{- include "postgresql-ha.validateValues" . }}
{{- $resourcesSections := list "postgresql" "pgpool" }}
{{- if .Values.witness.create }}
{{- $resourcesSections = append $resourcesSections "witness" }}
{{- end }}
{{- include "postgresql-ha.checkRollingTags" . }}
{{- include "common.warnings.resources" (dict "sections" $resourcesSections "context" .) }}
{{- include "common.warnings.modifiedImages" (dict "images" (list .Values.postgresql.image .Values.pgpool.image .Values.metrics.image .Values.volumePermissions.image) "context" .) }}
{{- include "common.errors.insecureImages" (dict "images" (list .Values.postgresql.image .Values.pgpool.image .Values.metrics.image .Values.volumePermissions.image) "context" .) }}