Realm cache in standalone-HA vs domain clustered mode

Hi to all. I have two Keycloak servers that are pointing to the same database.
The configuration is standalone-HA.
I've noticed that if I make a change to an existing object on one server, the modification is visible on the second server only after a restart. I imagine that is because each server maintains a cache, and this cache is not shared between the two Keycloak servers.
But if that's so, what is the point of the standalone-HA mode?

To have the cache working between the two servers, do I have to switch to domain clustered mode?

thanks :wink:


In standalone-ha.xml, there is a section for the infinispan cache:
<subsystem xmlns="urn:jboss:domain:infinispan:X.Y">
with some sub-entries
<distributed-cache name="sessions" owners="X"/>
and similar for other types of sessions, action tokens and so on…
Make sure the owners value is at least 2 (the default of 1 is useless). Of course, this has to be changed in every node's standalone-ha.xml.
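If you prefer not to edit the XML by hand, the same change can also be made with the jboss-cli tool shipped with the server. This is only a sketch under the assumption that your caches live in the default keycloak cache container with the stock names; a reload is needed afterwards for the change to take effect:

```shell
# Connect to a running node's management interface (default port 9990)
# and raise the number of owners for the distributed session caches.
$JBOSS_HOME/bin/jboss-cli.sh --connect <<'EOF'
batch
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=2)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:write-attribute(name=owners, value=2)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=owners, value=2)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=clientSessions:write-attribute(name=owners, value=2)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineClientSessions:write-attribute(name=owners, value=2)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=owners, value=2)
run-batch
EOF
```

Either way, the edit has to be applied on every node, since each node reads its own standalone-ha.xml.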

You don’t have to switch to domain clustered mode to enable the shared cache.


1 Like

thanks for quick reply!

So in a standalone-HA cluster it is sufficient to point the two servers at the same DB and to set owners=2 on every node.

Is that correct?

I’ve tried it but it doesn’t work; I changed all those values to 2, but when I change an object on the second server the modification is not seen on the first until I restart the server.

What do you mean by “an object”? You have to be more precise in what you describe.
Additionally, if you read the section in standalone-ha.xml properly, you will see that there are different caches configured: local and distributed ones. The caches with the local configuration are, obviously, not distributed, no matter whether you add owners=2 there.

Sorry, I’ll try to be more specific.
For example, I log in to the admin UI of the first node and change a property of a client (description, redirect URI, etc.). I save, and on the first node the properties of the client are changed.
Then I log in to the second node, and the properties that I changed on the first one have not changed. If I restart the second node, the properties are correctly shown in the administration console.
I changed the distributed caches in standalone-ha.xml:

<subsystem xmlns="urn:jboss:domain:infinispan:12.0">
            <cache-container name="keycloak" modules="org.keycloak.keycloak-model-infinispan">
                <transport lock-timeout="60000"/>
                <local-cache name="realms">
                    <heap-memory size="10000"/>
                </local-cache>
                <local-cache name="users">
                    <heap-memory size="10000"/>
                </local-cache>
                <distributed-cache name="sessions" owners="2"/>
                <distributed-cache name="authenticationSessions" owners="2"/>
                <distributed-cache name="offlineSessions" owners="2"/>
                <distributed-cache name="clientSessions" owners="2"/>
                <distributed-cache name="offlineClientSessions" owners="2"/>
                <distributed-cache name="loginFailures" owners="2"/>
                <local-cache name="authorization">
                    <heap-memory size="10000"/>
                </local-cache>
                <!-- … -->
            </cache-container>
</subsystem>

Are there any other parameters I have to change?

thanks for support :wink:

When you run a cluster, you don’t log in to one node and then log in to another node. You have ONE cluster, and in front of your nodes there must be a reverse proxy/load balancer…

As you can see in standalone-ha.xml, only objects around sessions are distributed. Everything else that is stored in the database and is metadata (like realm settings, client configs, user data, etc.) is NOT distributed. Every node reads this from the database and puts it in a local cache. The local caches can be invalidated through the admin UI / admin API.
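For context: in a stock standalone-ha.xml the keycloak cache container also contains a replicated work cache, which is what carries those invalidation messages between nodes. A rough sketch of how the container is laid out (the exact entries vary by version; this is not your file):

```xml
<cache-container name="keycloak" modules="org.keycloak.keycloak-model-infinispan">
    <transport lock-timeout="60000"/>
    <!-- local caches backed by the DB; only invalidation messages travel between nodes -->
    <local-cache name="realms"/>
    <local-cache name="users"/>
    <local-cache name="authorization"/>
    <!-- the replicated "work" cache transports cluster-wide tasks such as cache invalidation -->
    <replicated-cache name="work"/>
    <!-- session data itself is distributed across nodes -->
    <distributed-cache name="sessions" owners="2"/>
</cache-container>
```

If the work cache is missing, or the cluster transport is broken, the invalidation messages never reach the other node, and it keeps serving stale entries from its local caches.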

And, of course, make sure that your nodes discover each other and are able to communicate with each other.

Isn’t it the case that Hibernate uses Infinispan to distribute the information about which entity has to be invalidated from the local caches because there was a DB update?

            <cache-container name="hibernate" modules="org.infinispan.hibernate-cache">
                <transport lock-timeout="60000"/>
                <local-cache name="local-query">
                    <heap-memory size="10000"/>
                    <expiration max-idle="100000"/>
                </local-cache>
                <invalidation-cache name="entity">
                    <transaction mode="NON_XA"/>
                    <heap-memory size="10000"/>
                    <expiration max-idle="100000"/>
                </invalidation-cache>
                <replicated-cache name="timestamps"/>
            </cache-container>

Maybe I am misunderstanding something here…

For my setup, I had to add the following parameters to the standalone startup script to get Infinispan working:

--server-config=standalone-ha.xml -bprivate={NODE_X_IP}{NODE_X_NAME}{NODE_X_ID}
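The placeholders above stand for per-node values. On a stock WildFly-based distribution, a clustered node is typically started roughly like this (a sketch; the IP address and node name are assumptions for your environment):

```shell
# Start one node with the HA profile, bind the private (clustering)
# interface to this node's IP, and give the node a unique name.
./bin/standalone.sh \
  --server-config=standalone-ha.xml \
  -bprivate=192.168.1.10 \
  -Djboss.node.name=node1
```

The second node would use its own IP for -bprivate and a different -Djboss.node.name; duplicate node names break the cluster view.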

…where X is your node number.

Yes, the information about which entities to invalidate is distributed, not the data itself.
Infinispan should be set up properly, of course, regarding discovery, communication, node names, etc. Which parameters need which values depends heavily on the environment.

Yep. Maybe a failure in this “information distribution” causes the second node to display the old (cached) value instead of refreshing it from the DB…

Thanks to all for the explanation. I’ll try to clarify.
In my final environment I will have a load balancer in front of the two Keycloak instances, but right now I’m testing the two instances to verify the cache. I’m using the standalone clustered mode, so I have two instances sharing one PostgreSQL database.
At this point I’m asking whether there is a standard set of configurations that should be used in a simple case like this. My goal is to invalidate the cache when I modify something (client, configuration, etc.) in the admin UI on one node, since when I run the environment behind a load balancer I have to be sure that the caches of both nodes are updated.
Hence my initial question: in the Keycloak documentation I can’t find anything in the standalone-HA section regarding the cache. Where can I find some information to configure the environment in the best way?

thanks again!


If you change something in the admin UI then all instances in the cluster should be notified of the change.

My suggestion would be to check whether your instances actually discovered each other in the cluster. There are multiple ways to set up discovery. Also make sure that the nodes ‘see’ each other on the network.

For example, if you want a multicast setup, you might want to check that your private interface is bound.
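One quick sanity check, assuming default logging and log locations: when the nodes discover each other, the server log shows an Infinispan/JGroups cluster view that lists both node names. Something like:

```shell
# After starting both nodes, look for the cluster view in the log.
# A healthy two-node cluster logs a line similar to:
#   ISPN000094: Received new cluster view for channel ejb: [node1|1] (2) [node1, node2]
grep "Received new cluster view" standalone/log/server.log
```

If each node only ever sees a view containing itself, discovery is failing (multicast blocked, wrong bind address, firewall, etc.) and invalidation messages cannot be delivered.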

There are other methods to configure a cluster. The following link should provide an overview