Clustering not working in 16.1.1 as it used to in 15.0.2

In my production environment I am running Keycloak 15.0.2 as a two-node cluster, deployed with the bitnami Kubernetes Helm chart.

With the following configuration the cluster boots and I can log in, and I can see in the logs that the two nodes find each other using KUBE_PING.

If I however boot up a second cluster in a different namespace, with a different URL, I get the login screen but cannot log in. I keep getting a message that there are too many redirects.

Also, in the logs I can see that the two nodes have not joined into a cluster.
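For anyone comparing logs: when the nodes do join, each pod logs an Infinispan view change that lists both members. A quick way to check (the pod, namespace, and channel names here are examples — adjust to your release):

```shell
# Look for Infinispan's "new cluster view" message; a healthy two-node
# cluster lists both pods in the view. Sample line from a working cluster:
sample='ISPN000094: Received new cluster view for channel ejb: [keycloak-0|1] (2) [keycloak-0, keycloak-1]'
pattern='ISPN000094'
echo "$sample" | grep -q "$pattern" && echo "joined"

# Against live pods (pod/namespace names are placeholders):
#   kubectl logs keycloak-0 -n <namespace> | grep ISPN000094
```

If that line never appears, or the view only ever contains one member, the nodes are running as independent singletons.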

Any help in fixing this problem would be appreciated.

replicaCount: 2

auth:
  adminUser: blaadmin
  adminPassword: "bla"
  createAdminUser: true
  managementUser: blamanager
  managementPassword: "blabla"

serviceDiscovery:
  enabled: true

proxyAddressForwarding: true

ingress:
  enabled: true
  ingressClassName: "bla-prod"
  tls: true
  certManager: true
  annotations: {
    # nginx, "letsencrypt-prod"
  }

service:
  type: ClusterIP

rbac:
  create: true

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - keycloak
        topologyKey: ""


Same problem! I can confirm that there are no join messages in the logs.

My current version is 11.0.1, and the upgrade itself completes without problems, but when I scale to 2 instances it stops authenticating correctly.

Please note that the current 11.0.1 version already works in cluster mode, and version 16.1.1 works in single-node mode.

I searched the migration guides and can’t find any change to the cluster configuration: Upgrading Guide

The “TOO_MANY_REDIRECTS” error is also confirmed; in the logs there are “LOGIN_ERROR” messages with “expired_code”.
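That “expired_code” pattern is what you get when each request lands on a node that doesn’t know about the in-progress login session, so the flow restarts and loops. While the cluster join is broken, one stopgap (not a fix) is session affinity at the ingress — a sketch, assuming ingress-nginx and the chart’s ingress.annotations key:

```yaml
ingress:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "AUTH_SESSION_ID"
```

This pins a browser to one pod so logins work, but sessions are still not replicated; the real fix is getting JGroups discovery working again.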

Yes, that is what I am getting too when I try to log into Keycloak with two pods spun up. My production environment runs Keycloak 15.0.2 and it works well… I don’t get it.

My test environment has this configuration and there it doesn’t work:
Keycloak version 16.1.1
Kubernetes version v1.23.4
Helm chart version bitnami/keycloak 6.2.4

Sorry, my fault.

We use DNS_PING in our cluster configuration, pointing at the internal service DNS name (e.g. {service}.{namespace}.svc.cluster.local). The service name was changed, so when the new config was applied it stopped finding the cluster nodes.

After correcting the service name in the DNS_PING configuration, it’s solved.
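For reference, the DNS_PING setup described above can be sketched in the bitnami chart’s values — the exact key layout is my assumption from the chart’s serviceDiscovery block, and the service/namespace names are placeholders:

```yaml
serviceDiscovery:
  enabled: true
  protocol: dns.DNS_PING
  properties:
    # Must match the headless service actually deployed in the namespace;
    # this is exactly the name that broke for us when it was changed.
    - dns_query=keycloak-headless.my-namespace.svc.cluster.local
```

DNS_PING resolves that query to the pod IPs, so any rename of the service (or a namespace move) silently empties the member list.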

Maybe your problem is similar, since you changed the namespace.

I’m using KUBE_PING, so I think the namespace shouldn’t matter. I am using the same values.yaml.

Maybe I should try out DNS_PING.
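One thing worth checking with KUBE_PING: it discovers peers by listing pods through the Kubernetes API, so the pod’s service account needs permission to do that in its own namespace. In the bitnami chart that should be covered by (my assumption from the chart’s rbac block; the kubectl names are placeholders):

```yaml
rbac:
  create: true

# Verify the permission from outside the pod:
#   kubectl auth can-i list pods \
#     --as=system:serviceaccount:<namespace>:<keycloak-sa> -n <namespace>
```

If that check fails, KUBE_PING gets an empty pod list and every node starts its own one-member cluster — which matches the missing join messages.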

Well, I set up a new cluster, but instead of using Flannel I am now using Calico, and the problem seems to have disappeared.

Changing from Flannel to Calico was the only difference in this cluster setup.
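If anyone hits this again, the CNI difference suggests pod-to-pod traffic on the JGroups transport port was being dropped rather than anything in Keycloak itself. A crude probe you can run from inside one pod against the other pod’s IP (7600 is the default JGroups TCP port in the WildFly-based Keycloak images; the IP below is a placeholder):

```shell
# Returns "open" if a TCP connection to host:port succeeds within 2s,
# "closed" otherwise (uses bash's /dev/tcp, so no extra tools needed).
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Run from inside keycloak-0 (e.g. via kubectl exec) against the peer:
check_port 10.244.1.23 7600   # replace with keycloak-1's pod IP
```

If this reports "closed" between pods on different nodes but "open" between pods on the same node, the overlay network (not Keycloak) is the problem.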