Running multiple Keycloak pod replicas breaks the Admin Console UI

I have set up Keycloak v20.0 with the Bitnami chart.
When I run multiple replicas of Keycloak, the Admin Console UI fails to load and several of its CSS and JS files return 404.
After digging into it, I found that the name of the resources folder inside keycloak/data/tmp/kc-gzip-cache/ is different on every pod.
Because of this, when I request a CSS URL, I get a 200 once and then 404s, depending on which pod serves the request.

I also tried the --auto-build option, but it did not work.

Any pointers to fix this?

Have you configured Keycloak to run as a cluster, or just multiple replicas?
Merely increasing the replica count, without explicitly configuring a Keycloak cluster, won't work.
Additionally, since you write that each pod has a different resource folder path, I guess you don't run your cluster with a common external database, which is essential for running a cluster.
Without telling us about your configuration, we can't be of much help.
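
For reference, pointing every replica at one shared database comes down to the db options, e.g. as env vars on the stock Keycloak image (a minimal sketch; the JDBC URL, credentials, and secret name are placeholders, and the Bitnami chart exposes its own database values instead):

    env:
      - name: KC_DB
        value: postgres
      - name: KC_DB_URL
        value: jdbc:postgresql://postgres.example.svc.cluster.local:5432/keycloak
      - name: KC_DB_USERNAME
        value: keycloak
      - name: KC_DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: keycloak-db        # placeholder secret
            key: password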

Thanks @dasniko for the reply.
I faced this issue in Keycloak v20 only; it works in v21.
Also, I did not set different resource folder paths; they were being created automatically.

Btw, I found a fix for the issue in v21, so there are no issues now :slight_smile:

Hi, I am encountering this issue as well, on Keycloak v20.0.5. I am not ready to upgrade to v21 or higher yet, as I am trying to get this working after upgrading from v19, where I previously had multiple replicas configured under the Wildfly distribution. After upgrading to v20 successfully and verifying that the UI works fine with a single replica, I started reading the cache-related docs to determine how to recreate the multi-replica configuration under Quarkus; however, I am stuck with this issue.

I followed Configuring distributed caches - Keycloak as best I could, but I am not able to get a proper UI now. The symptom is the same as the one described by the OP.

In my Dockerfile, I edit /opt/keycloak/conf/cache-ispn.xml to change owners to 3, and then run:

    /opt/keycloak/bin/kc.sh build --db=postgres --cache-stack=kubernetes
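
(As an aside, with the stock quay.io/keycloak/keycloak image the same build options can also be supplied as environment variables, in which case Keycloak re-runs the build step at startup. A sketch of that alternative, not what I did here:)

    env:
      - name: KC_DB
        value: postgres
      - name: KC_CACHE_STACK
        value: kubernetes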

In my Kubernetes manifest, I added an env variable JAVA_OPTS_APPEND with the value -Djava.io.tmpdir=/tmp -Djgroups.bind.addr=${HOST_IP} -Djgroups.bind.port=7600 -Djgroups.dns.query=keycloak-headless -XX:+UseParallelGC, where ${HOST_IP} is a variable set by Kubernetes based on the Pod IP (a fuller sketch of the env entry follows the snippet below):

        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
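
For completeness, the matching JAVA_OPTS_APPEND entry might look like this (a sketch; note that Kubernetes only interpolates $(VAR) references inside env values, so a literal ${HOST_IP} is passed through to the container and has to be expanded by the entrypoint shell instead):

        - name: JAVA_OPTS_APPEND
          value: >-
            -Djava.io.tmpdir=/tmp
            -Djgroups.bind.addr=$(HOST_IP)
            -Djgroups.bind.port=7600
            -Djgroups.dns.query=keycloak-headless
            -XX:+UseParallelGC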

I also created the keycloak-headless service in the same namespace. The only difference is that I have not yet scaled up my replicas. Keycloak was working fine as a single replica before I made the above changes and redeployed (still as a single replica), but when I tried to access the admin panel, still before scaling up, I ran into the issue.
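
For reference, the headless Service looks roughly like this (a sketch; the selector labels are assumptions and must match your pod labels):

    apiVersion: v1
    kind: Service
    metadata:
      name: keycloak-headless
    spec:
      clusterIP: None                  # headless: DNS returns the pod IPs for jgroups.dns.query
      publishNotReadyAddresses: true   # lets pods discover each other before they are Ready
      selector:
        app: keycloak                  # assumption: adjust to your deployment's labels
      ports:
        - name: jgroups
          port: 7600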

Turns out it was unrelated: the Keycloak logs showed that /opt/keycloak/data/tmp didn't exist and was not writable, so I created a writable volume for it.
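
In case it helps anyone else, mounting an emptyDir there is one way to do it (a sketch against the Deployment's pod and container specs):

    # in the pod spec
    volumes:
      - name: kc-data-tmp
        emptyDir: {}

    # in the Keycloak container spec
    volumeMounts:
      - name: kc-data-tmp
        mountPath: /opt/keycloak/data/tmp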