Keycloak 19 with distributed cache


I have Keycloak 19 running behind an nginx reverse proxy in a Kubernetes setup, currently with a replica count of 1 (i.e. 1 pod). Everything is working fine: the application and the admin console.

When I increase the pod count to 3 for HA, things start to fall apart during the login process. I believe the default distributed cache, with no customization, is to blame, and my sessions keep jumping between pods (my hunch).

Being new to distributed caching in Keycloak 19, how can I configure it?

Going through forums, I stumbled upon discussions about JDBC_PING, and then found this nice repo: GitHub - ivangfr/keycloak-clustered: keycloak-clustered extends the Keycloak Docker image and makes it easy to run a cluster of Keycloak 19.0.1 instances. In there I see that the SQL statements used have a "bind_addr" column; however, I don't see that column at all in my existing Keycloak database.

So, how can I configure Keycloak 19 with distributed caching using JDBC_PING?
Alternatively, is there a better way to do it?
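For context, the JDBC_PING approach in Keycloak 19 (Quarkus distribution) involves passing a custom Infinispan/JGroups config file via `--cache-config-file`. A rough sketch of what that file could look like — the stack name and the use of the `KC_DB_*` environment variables here are my assumptions, not something taken from the repo:

```xml
<!-- conf/cache-ispn-jdbc-ping.xml, started with:
     bin/kc.sh start - -cache-config-file=cache-ispn-jdbc-ping.xml -->
<infinispan>
    <jgroups>
        <!-- extend the default TCP stack, swapping discovery for JDBC_PING -->
        <stack name="jdbc-ping-tcp" extends="tcp">
            <JDBC_PING connection_driver="org.postgresql.Driver"
                       connection_url="${env.KC_DB_URL}"
                       connection_username="${env.KC_DB_USERNAME}"
                       connection_password="${env.KC_DB_PASSWORD}"
                       stack.combine="REPLACE"
                       stack.position="MPING"/>
        </stack>
    </jgroups>
    <cache-container name="keycloak">
        <transport stack="jdbc-ping-tcp"/>
        <!-- ...plus the default cache definitions from conf/cache-ispn.xml... -->
    </cache-container>
</infinispan>
```

Note that JDBC_PING creates its own discovery table (with the `bind_addr` column) at runtime; it is not part of Keycloak's regular database schema, which would explain why the column is missing from an existing installation.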

From the docs I found Configuring distributed caches - Keycloak. They mention KC_CACHE_STACK, and one of its values is "kubernetes". Would simply specifying that stack value work out of the box, or is anything additional required?

Thank you

I have been using Keycloak 19 with a distributed cache on Kubernetes for quite some time now.

These settings work for me: Configuring distributed caches - Keycloak

You'll need a headless service pointing to your Keycloak pods, and use that service's DNS name in the -Djgroups.dns.query system property.
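A minimal sketch of the two pieces involved — the service name, namespace, and labels below are placeholders you'd adapt to your own deployment:

```yaml
# Headless Service: clusterIP None makes DNS return the individual pod IPs,
# which is what the JGroups DNS_PING discovery (kubernetes stack) queries.
apiVersion: v1
kind: Service
metadata:
  name: keycloak-headless
spec:
  clusterIP: None
  selector:
    app: keycloak
  ports:
    - name: jgroups
      port: 7800
---
# Fragment of the Keycloak container spec (Deployment or StatefulSet):
# env:
#   - name: KC_CACHE
#     value: "ispn"            # the default cache type
#   - name: KC_CACHE_STACK
#     value: "kubernetes"
#   - name: JAVA_OPTS_APPEND
#     value: "-Djgroups.dns.query=keycloak-headless.my-namespace.svc.cluster.local"
```

With this in place, each pod discovers its peers through the headless service's DNS records instead of multicast, which generally does not work inside a Kubernetes cluster network.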


Thank you, this worked like a charm. I did exactly as instructed.


There's also the option of using the new Keycloak Operator. It will take care of the Infinispan cache configuration for you when you increase the number of instances.
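With the operator, scaling is driven by a Keycloak custom resource rather than by hand-written cache config. A sketch of the relevant part (the resource name is a placeholder, and database/hostname settings are omitted here):

```yaml
# Keycloak CR managed by the Keycloak Operator; the operator wires up
# the Infinispan clustering between the instances automatically.
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: my-keycloak
spec:
  instances: 3   # scale the cluster by changing this value
  # ...database and hostname configuration go here as well...
```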
