Keycloak HA in K8s: Unexpected Downtime

Hi,

I am running a Keycloak cluster in Kubernetes with an HA configuration and JGroups KUBE_PING for discovery (most of the time the setup has 3 cluster nodes). This works fine in my UAT clusters. But in my production setup, if one pod goes down (because of a new deployment, a pod deletion, or a liveness probe failure), all the other pods also go into an unready state, resulting in an application downtime of about 3 minutes.
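For context, the clustering is wired up through the KUBE_PING discovery support in the jboss/keycloak image. It looks roughly like the sketch below; the namespace and label values are placeholders, not my exact manifest, and the property string may differ slightly in my deployment:

```yaml
# Rough sketch of the clustering-related env vars on the Keycloak container
# (namespace/label values are placeholders, not the actual production manifest)
env:
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: kubernetes.KUBE_PING
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: "namespace=keycloak,labels=app=keycloak"
```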

Can someone please help me to identify the issue here? I can provide more information if required.

PS: I am using Keycloak 6.0.1 for this setup (I know it is an older version and I am planning to upgrade). I am also using /auth/realms/master as the readiness probe and /auth/ as the liveness probe, with a 10s timeout.
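For reference, the probes are defined roughly as follows. The paths and the 10s timeout match what I described above; the port, period, and failure-threshold values here are approximations rather than my exact production settings:

```yaml
# Approximate probe configuration on the Keycloak container
# (periodSeconds / failureThreshold values are placeholders)
readinessProbe:
  httpGet:
    path: /auth/realms/master
    port: 8080          # standard Keycloak HTTP port
  timeoutSeconds: 10
  periodSeconds: 10
  failureThreshold: 3
livenessProbe:
  httpGet:
    path: /auth/
    port: 8080
  timeoutSeconds: 10
  periodSeconds: 15
  failureThreshold: 3
```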