Hello,
I’ve installed Keycloak (21.0.2) in production mode with 3 replicas using the Bitnami Helm chart.
I’m not sure whether the cluster is correctly configured for HA. I can see log entries about rebalancing between the Keycloak nodes.
I’m facing one issue: for testing, I ran a script that creates 100 realms. While the script was running, I manually deleted a pod, and the script got 502 and 504 errors for that specific realm. How can I solve this?
Please let me know how I can verify that my cluster is HA, and how to solve the above issue.
Thanks
This answer is related to Is external infinispan required for Keycloak HA? - #2 by weltonrodrigo, but specific to Bitnami:
Taking a look at the Bitnami chart, I see that it already sets up Infinispan clustering and JGroups peer discovery for the pods.
So, 2 replicas of this pod will find each other and form a cluster out of the box, with no intervention necessary.
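If you want to check that from the chart side, these are the values involved; a minimal sketch, assuming the key names used by recent versions of the Bitnami keycloak chart (verify against your chart version):

```yaml
# values.yaml (Bitnami keycloak chart) -- key names assumed, check your chart version
replicaCount: 3
cache:
  enabled: true          # enables Infinispan distributed caching across replicas
  stackName: kubernetes  # JGroups stack that discovers peers through Kubernetes
```

At runtime you can confirm the cluster actually formed by grepping any pod’s logs for `ISPN000094: Received new cluster view`, which should list all the members.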
It’s important to note that CLUSTERED MODE DEPENDS ON AN EXTERNAL DATABASE.
So, if you are just running the Keycloak instances, you’ll need to configure them all to talk to a single database. I’m not sure how to do that in the Bitnami chart, but it should be pretty straightforward, something like the sketch below.
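A sketch assuming the chart’s standard `postgresql`/`externalDatabase` values (host and credentials here are placeholders):

```yaml
# values.yaml (Bitnami keycloak chart)
postgresql:
  enabled: false                   # skip the bundled PostgreSQL
externalDatabase:
  host: my-postgres.example.com    # placeholder: the single shared database host
  port: 5432
  user: keycloak
  password: changeme               # placeholder
  database: keycloak
```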
Thanks for the quick response, @weltonrodrigo.
I have installed a standalone PostgreSQL. All 3 replicas are talking to the same DB.
@weltonrodrigo, one more question regarding this.
Suppose we use only the embedded HA setup, without an external Infinispan; where will the user sessions be stored then?
I tried creating a test realm and a few users, then used Test application - Keycloak to test my setup by logging in as multiple users.
I deleted the pods individually, but when I checked the admin console, the user sessions were still present even after the pods had been deleted.
I have 3 replicas and I have not set the CACHE_OWNERS property. I checked cache-ispn.xml inside the pods, and it shows owners as 2.
Can you please comment on how sessions are stored and when they get wiped out?
Thanks
Sessions are stored in the embedded Infinispan cache. The owners=2 you saw in cache-ispn.xml means each session is kept on two pods, which is why your sessions survived deleting pods one at a time. They will be wiped out when the last standing instance is deleted.
Meaning that if you set replicas to zero, the cluster will stop existing when the last pod is stopped, and the sessions go with it.
You can configure Infinispan to persist data to disk, but for that you’ll need a custom cache configuration (not necessarily an external Infinispan, but a customized one).
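As a rough example, the sessions cache in conf/cache-ispn.xml could get a disk store; an untested sketch, assuming the Infinispan schema bundled with Keycloak 21, and you’d point Keycloak at the customized file with the `--cache-config-file` option:

```xml
<!-- sketch: the default sessions cache with a file-based store added -->
<distributed-cache name="sessions" owners="2">
    <expiration lifespan="-1"/>
    <persistence>
        <!-- writes entries to disk; needs a persistent volume to survive pod restarts -->
        <file-store/>
    </persistence>
</distributed-cache>
```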
You probably don’t need that: in theory, you always have some pod running, and when a new pod starts, it joins the cluster and the data is redistributed.
Thanks @weltonrodrigo for clearing that up.
I tried using an external Infinispan, but somehow it is giving me issues, similar to those in Query related to Keycloak HA with separate Infinispan cache.
Personally, I don’t see much benefit in using an external Infinispan:
- Persistence: as long as you have at least one running pod, the sessions will persist.
- Upgrades: Keycloak doesn’t support upgrades without wiping sessions (it can work, but it’s not guaranteed), even with an external Infinispan (I think).
So, you’ll have the hassle of maintaining your own Infinispan cluster without much of the benefit.
Note that I’m assuming you are not dealing with some kind of gigazilla, multi-tier, multi-tenant Keycloak deployment. If that is the case, then an external Infinispan can make sense, instead of several mini embedded clusters.