Setting up the Keycloak Bitnami chart on Rancher Kubernetes - JDBC connection failing

Hey guys,
I'm trying to set up Keycloak on my Rancher Kubernetes test environment, but the database JDBC connection keeps failing.
Here is the deployed pod (output of kubectl get pod keycloak-bitnami-0 -o yaml):

apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/configmap-env-vars: 9171c7da26dc7257a2aae80136f53b9f7a841fd8e0d9fdb86661c1d2b2c809ee
    checksum/secrets: c6c278f69e59f814abf3368d9fbf7bbefaf14766e8a997fda6badd5cd197aa37
  creationTimestamp: "2022-06-28T13:04:50Z"
  generateName: keycloak-bitnami-
  labels:
    app.kubernetes.io/component: keycloak
    app.kubernetes.io/instance: keycloak-bitnami
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: keycloak
    controller-revision-hash: keycloak-bitnami-54c98ff9c8
    helm.sh/chart: keycloak-9.3.2
    statefulset.kubernetes.io/pod-name: keycloak-bitnami-0
  # managedFields trimmed for readability
  name: keycloak-bitnami-0
  namespace: keycloak
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: keycloak-bitnami
    uid: 33c23b74-cab2-4290-954b-8271c6333b32
  resourceVersion: "1986301"
  uid: a328e91a-2c93-4425-8841-c211944fc356
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: keycloak-bitnami
              app.kubernetes.io/name: keycloak
          namespaces:
          - keycloak
          topologyKey: kubernetes.io/hostname
        weight: 1
  containers:
  - env:
    - name: KUBERNETES_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: BITNAMI_DEBUG
      value: "false"
    - name: KEYCLOAK_ADMIN_PASSWORD
      valueFrom:
        secretKeyRef:
          key: admin-password
          name: keycloak-bitnami
    - name: KEYCLOAK_MANAGEMENT_PASSWORD
      valueFrom:
        secretKeyRef:
          key: management-password
          name: keycloak-bitnami
    - name: KEYCLOAK_DATABASE_PASSWORD
      valueFrom:
        secretKeyRef:
          key: password
          name: keycloak-bitnami-postgresql
    envFrom:
    - configMapRef:
        name: keycloak-bitnami-env-vars
    image: docker.io/bitnami/keycloak:18.0.2-debian-11-r0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: http
        scheme: HTTP
      initialDelaySeconds: 300
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 5
    name: keycloak
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    - containerPort: 8443
      name: https
      protocol: TCP
    - containerPort: 9990
      name: http-management
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /realms/master
        port: http
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    securityContext:
      runAsNonRoot: true
      runAsUser: 1001
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-89rqp
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: keycloak-bitnami-0
  nodeName: worker0.pascals-lab.cool
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1001
  serviceAccount: keycloak-bitnami
  serviceAccountName: keycloak-bitnami
  subdomain: keycloak-bitnami-headless
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-89rqp
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-06-28T13:04:50Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-06-28T13:04:50Z"
    message: 'containers with unready status: [keycloak]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-06-28T13:04:50Z"
    message: 'containers with unready status: [keycloak]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-06-28T13:04:50Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://1f6c0143020f45091651cb8f80e3596e96ca2c70f89c73630f053c0be9dca1c4
    image: bitnami/keycloak:18.0.2-debian-11-r0
    imageID: docker-pullable://bitnami/keycloak@sha256:2d4942583df301f89f64c0d0aa8da360c056ea9215eda17de0ef3d37fbc34217
    lastState: {}
    name: keycloak
    ready: false
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-06-28T13:04:51Z"
  hostIP: 192.168.178.104
  phase: Running
  podIP: 10.42.4.86
  podIPs:
  - ip: 10.42.4.86
  qosClass: BestEffort
  startTime: "2022-06-28T13:04:50Z"

I hope my error can be identified easily. Thanks in advance,
Pascal

Edit: here are the actual Helm values I'm deploying the chart with:

affinity: {}
args: []
auth:
  adminPassword: admin1234
  adminUser: admin
  existingSecret: ''
  existingSecretPerPassword: {}
  managementPassword: manager1234
  managementUser: manager
  tls:
    autoGenerated: false
    enabled: false
    existingSecret: ''
    jksSecret: ''
    keystoreFilename: ''
    keystorePassword: ''
    resources:
      limits: {}
      requests: {}
    truststoreFilename: ''
    truststorePassword: ''
    usePem: false
autoscaling:
  enabled: false
  maxReplicas: 11
  minReplicas: 1
  targetCPU: ''
  targetMemory: ''
cache:
  enabled: false
clusterDomain: cluster.local
command: []
commonAnnotations: {}
commonLabels: {}
configuration: ''
containerPorts:
  http: 8080
  https: 8443
  management: 9990
containerSecurityContext:
  enabled: true
  runAsNonRoot: true
  runAsUser: 1001
customLivenessProbe: {}
customReadinessProbe: {}
customStartupProbe: {}
diagnosticMode:
  args:
    - infinity
  command:
    - sleep
  enabled: false
existingConfigmap: ''
externalDatabase:
  database: bitnami_keycloak
  existingSecret: ''
  existingSecretPasswordKey: ''
  host: keycloak-bitnami-postgresql
  password: admin1234
  port: 5432
  user: bn_keycloak
extraDeploy: []
extraEnvVars: []
extraEnvVarsCM: ''
extraEnvVarsSecret: ''
extraStartupArgs: ''
extraVolumeMounts: []
extraVolumes: []
fullnameOverride: ''
global:
  imagePullSecrets: []
  imageRegistry: ''
  storageClass: ''
hostAliases: []
image:
  debug: false
  pullPolicy: IfNotPresent
  pullSecrets: []
  registry: docker.io
  repository: bitnami/keycloak
  tag: 18.0.2-debian-11-r0
ingress:
  annotations: {}
  apiVersion: ''
  enabled: false
  extraHosts: []
  extraPaths: []
  extraRules: []
  extraTls: []
  hostname: keycloak.local
  ingressClassName: ''
  path: /
  pathType: ImplementationSpecific
  secrets: []
  selfSigned: false
  servicePort: http
  tls: false
initContainers: []
initdbScripts: {}
initdbScriptsConfigMap: ''
keycloakConfigCli:
  annotations:
    helm.sh/hook: post-install,post-upgrade,post-rollback
    helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation
    helm.sh/hook-weight: '5'
  args: []
  backoffLimit: 1
  command: []
  configuration: {}
  containerSecurityContext:
    enabled: true
    runAsNonRoot: true
    runAsUser: 1001
  enabled: false
  existingConfigmap: ''
  extraEnvVars: []
  extraEnvVarsCM: ''
  extraEnvVarsSecret: ''
  extraVolumeMounts: []
  extraVolumes: []
  hostAliases: []
  image:
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/keycloak-config-cli
    tag: 5.2.1-debian-11-r1
  podAnnotations: {}
  podLabels: {}
  podSecurityContext:
    enabled: true
    fsGroup: 1001
  resources:
    limits: {}
    requests: {}
kubeVersion: ''
lifecycleHooks: {}
livenessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 300
  periodSeconds: 1
  successThreshold: 1
  timeoutSeconds: 5
logging:
  output: default
metrics:
  enabled: false
  service:
    annotations:
      prometheus.io/port: '{{ .Values.metrics.service.ports.http }}'
      prometheus.io/scrape: 'true'
    ports:
      http: 9990
  serviceMonitor:
    enabled: false
    honorLabels: false
    interval: 30s
    jobLabel: ''
    labels: {}
    metricRelabelings: []
    namespace: ''
    relabelings: []
    scrapeTimeout: ''
    selector: {}
nameOverride: ''
networkPolicy:
  additionalRules: {}
  allowExternal: true
  enabled: false
nodeAffinityPreset:
  key: ''
  type: ''
  values: []
nodeSelector: {}
pdb:
  create: false
  maxUnavailable: ''
  minAvailable: 1
podAffinityPreset: ''
podAnnotations: {}
podAntiAffinityPreset: soft
podLabels: {}
podManagementPolicy: Parallel
podSecurityContext:
  enabled: true
  fsGroup: 1001
postgresql:
  architecture: standalone
  auth:
    database: bitnami_keycloak
    existingSecret: ''
    password: admin1234
    username: bn_keycloak
  enabled: true
priorityClassName: ''
proxy: passthrough
rbac:
  create: false
  rules: []
readinessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
replicaCount: 1
resources:
  limits: {}
  requests: {}
schedulerName: ''
service:
  annotations: {}
  clusterIP: ''
  externalTrafficPolicy: Cluster
  extraPorts: []
  loadBalancerIP: ''
  loadBalancerSourceRanges: []
  nodePorts:
    http: ''
    https: ''
  ports:
    http: 80
    https: 443
  sessionAffinity: None
  sessionAffinityConfig: {}
  type: ClusterIP
serviceAccount:
  annotations: {}
  automountServiceAccountToken: true
  create: true
  name: ''
sidecars: []
startupProbe:
  enabled: false
  failureThreshold: 60
  initialDelaySeconds: 30
  periodSeconds: 5
  successThreshold: 1
  timeoutSeconds: 1
terminationGracePeriodSeconds: ''
tolerations: []
topologySpreadConstraints: []
updateStrategy:
  rollingUpdate: {}
  type: RollingUpdate

Edit: PostgreSQL pod logs:

postgresql 07:54:29.64
postgresql 07:54:29.65 Welcome to the Bitnami postgresql container
postgresql 07:54:29.65 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 07:54:29.65 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 07:54:29.65
postgresql 07:54:29.67 INFO ==> ** Starting PostgreSQL setup **
postgresql 07:54:29.70 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 07:54:29.70 INFO ==> Loading custom pre-init scripts...
postgresql 07:54:29.71 INFO ==> Initializing PostgreSQL database...
postgresql 07:54:29.73 INFO ==> pg_hba.conf file not detected. Generating it...
postgresql 07:54:29.73 INFO ==> Generating local authentication configuration
postgresql 07:54:29.75 INFO ==> Deploying PostgreSQL with persisted data...
postgresql 07:54:29.76 INFO ==> Configuring replication parameters
postgresql 07:54:29.79 INFO ==> Configuring fsync
postgresql 07:54:29.80 INFO ==> Configuring synchronous_replication
postgresql 07:54:29.84 INFO ==> Loading custom scripts...
postgresql 07:54:29.84 INFO ==> Enabling remote connections
postgresql 07:54:29.85 INFO ==> ** PostgreSQL setup finished! **
postgresql 07:54:29.87 INFO ==> ** Starting PostgreSQL **
2022-06-29 07:54:29.946 GMT [1] LOG: pgaudit extension initialized
2022-06-29 07:54:29.964 GMT [1] LOG: starting PostgreSQL 14.4 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-06-29 07:54:29.965 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-06-29 07:54:29.965 GMT [1] LOG: listening on IPv6 address "::", port 5432
2022-06-29 07:54:29.999 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2022-06-29 07:54:30.042 GMT [92] LOG: database system was shut down at 2022-06-29 07:47:43 GMT
2022-06-29 07:54:30.080 GMT [1] LOG: database system is ready to accept connections
2022-06-29 07:55:05.668 GMT [122] FATAL: password authentication failed for user "bn_keycloak"
2022-06-29 07:55:05.668 GMT [122] DETAIL: Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2022-06-29 07:55:07.109 GMT [123] FATAL: password authentication failed for user "bn_keycloak"
2022-06-29 07:55:07.109 GMT [123] DETAIL: Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2022-06-29 07:55:35.619 GMT [168] FATAL: password authentication failed for user "bn_keycloak"
2022-06-29 07:55:35.619 GMT [168] DETAIL: Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2022-06-29 07:55:37.040 GMT [169] FATAL: password authentication failed for user "bn_keycloak"
2022-06-29 07:55:37.040 GMT [169] DETAIL: Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2022-06-29 07:56:22.496 GMT [243] FATAL: password authentication failed for user "bn_keycloak"
2022-06-29 07:56:22.496 GMT [243] DETAIL: Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2022-06-29 07:56:23.849 GMT [244] FATAL: password authentication failed for user "bn_keycloak"
2022-06-29 07:56:23.849 GMT [244] DETAIL: Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
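
The line "Deploying PostgreSQL with persisted data..." above is the important one: the Bitnami PostgreSQL container only sets the user password on first initialization. Keycloak and the PostgreSQL container both read the same secret (keycloak-bitnami-postgresql, key "password"), so the mismatch is between that secret and the password persisted inside the database's PVC from an earlier install. A minimal sketch of how to check this (release/namespace names taken from this thread; the base64 string below is a placeholder encoding the "admin1234" from the values file, not output from a real cluster):

```shell
# In the cluster you would fetch the real value with:
#   kubectl -n keycloak get secret keycloak-bitnami-postgresql \
#     -o jsonpath='{.data.password}'
SECRET_B64='YWRtaW4xMjM0'   # placeholder: base64 of "admin1234"
DB_PASSWORD=$(printf '%s' "$SECRET_B64" | base64 -d)
echo "chart-side password: $DB_PASSWORD"
# Then test that password directly against the running database:
#   kubectl -n keycloak exec -it keycloak-bitnami-postgresql-0 -- \
#     env PGPASSWORD="$DB_PASSWORD" psql -U bn_keycloak \
#     -d bitnami_keycloak -c 'select 1'
# If psql also rejects it, the PVC predates the current password; deleting
# the PostgreSQL PVC (this destroys the data!) lets the chart re-initialize
# the database with the password currently in the secret.
```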

Keycloak pod logs:

keycloak 07:55:08.58
keycloak 07:55:08.58 Welcome to the Bitnami keycloak container
keycloak 07:55:08.58 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-keycloak
keycloak 07:55:08.58 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-keycloak/issues
keycloak 07:55:08.58
keycloak 07:55:08.58 INFO ==> ** Starting keycloak setup **
keycloak 07:55:08.59 INFO ==> Validating settings in KEYCLOAK_* env vars...
keycloak 07:55:08.60 INFO ==> Trying to connect to PostgreSQL server keycloak-bitnami-postgresql...
keycloak 07:55:08.61 INFO ==> Found PostgreSQL server listening at keycloak-bitnami-postgresql:5432
keycloak 07:55:08.61 INFO ==> Configuring database settings
keycloak 07:55:22.81 INFO ==> Enabling statistics
keycloak 07:55:22.82 INFO ==> Configuring http settings
keycloak 07:55:22.84 INFO ==> Configuring hostname settings
keycloak 07:55:22.84 INFO ==> Configuring cache count
keycloak 07:55:22.85 INFO ==> Configuring log level
keycloak 07:55:22.86 INFO ==> Configuring proxy
keycloak 07:55:22.87 INFO ==> ** keycloak setup finished! **
keycloak 07:55:22.89 INFO ==> ** Starting keycloak **
Updating the configuration and installing your custom providers, if any. Please wait.
2022-06-29 07:55:28,307 WARN [org.keycloak.services] (build-13) KC-SERVICES0047: metrics (org.jboss.aerogear.keycloak.metrics.MetricsEndpointFactory) is implementing the internal SPI realm-restapi-extension. This SPI is internal and may change without notice
2022-06-29 07:55:28,755 WARN [org.keycloak.services] (build-13) KC-SERVICES0047: metrics-listener (org.jboss.aerogear.keycloak.metrics.MetricsEventListenerFactory) is implementing the internal SPI eventsListener. This SPI is internal and may change without notice
2022-06-29 07:55:32,759 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 7243ms
2022-06-29 07:55:35,345 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: FrontEnd: <request>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin: <request>, Port: -1, Proxied: true
2022-06-29 07:55:35,627 WARN [io.agroal.pool] (agroal-11) Datasource '<default>': FATAL: password authentication failed for user "bn_keycloak"
2022-06-29 07:55:35,628 WARN [org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator] (JPA Startup Thread: keycloak-default) HHH000342: Could not obtain connection to query metadata: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "bn_keycloak"
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:646)
    at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:180)
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235)
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
    at org.postgresql.Driver.makeConnection(Driver.java:400)
    at org.postgresql.Driver.connect(Driver.java:259)
    at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
    at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
    at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:103)
    at org.postgresql.xa.PGXADataSource.getXAConnection(PGXADataSource.java:49)
    at org.postgresql.xa.PGXADataSource.getXAConnection(PGXADataSource.java:35)
    at io.agroal.pool.ConnectionFactory.createConnection(ConnectionFactory.java:216)
    at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:513)
    at io.agroal.pool.ConnectionPool$CreateConnectionTask.call(ConnectionPool.java:494)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at io.agroal.pool.util.PriorityScheduledExecutor.beforeExecute(PriorityScheduledExecutor.java:75)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1126)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
2022-06-29 07:55:36,123 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2022-06-29 07:55:36,142 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2022-06-29 07:55:36,155 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2022-06-29 07:55:36,395 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.9.Final
2022-06-29 07:55:36,999 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: node_121444, Site name: null
2022-06-29 07:55:37,041 WARN [io.agroal.pool] (agroal-11) Datasource '<default>': FATAL: password authentication failed for user "bn_keycloak"
2022-06-29 07:55:37,212 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2022-06-29 07:55:37,213 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to obtain JDBC connection
2022-06-29 07:55:37,215 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: FATAL: password authentication failed for user "bn_keycloak"
2022-06-29 07:55:37,215 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command. 

Edit: fixed it and got a stable deployment by setting up a manual PostgreSQL deployment and disabling the chart's bundled PostgreSQL (postgresql.enabled: false). Now everything works as intended.
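
For anyone landing here with the same problem, the values change for that fix looks roughly like the following. This is a sketch, not the exact values used: "my-postgres" and "my-postgres-secret" are hypothetical names for the manually deployed PostgreSQL service and its password secret; the keys themselves are the externalDatabase/postgresql keys already present in the values above.

```yaml
# Disable the bundled PostgreSQL subchart and point Keycloak at a
# manually managed database instead.
postgresql:
  enabled: false
externalDatabase:
  host: my-postgres.keycloak.svc.cluster.local  # hypothetical service name
  port: 5432
  database: bitnami_keycloak
  user: bn_keycloak
  existingSecret: my-postgres-secret            # hypothetical secret name
  existingSecretPasswordKey: password
```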
