After some testing I’ve discovered that one node, the active cluster coordinator, completely rewrites the cluster member state in the table; this is because the “remove_all_data_on_view_change” option is set to true. Since only one node updates the table, the bind_addr column is always set to the coordinator’s IP address.
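For reference, this is roughly where that option lives in a JGroups JDBC_PING protocol entry. This is only a sketch: the attribute names are JGroups’, but the driver, URL, and credentials below are placeholders, not values from this thread.

```xml
<!-- Sketch of a JDBC_PING stanza; connection values are placeholders. -->
<JDBC_PING connection_driver="org.postgresql.Driver"
           connection_url="jdbc:postgresql://db-host:5432/keycloak"
           connection_username="keycloak"
           connection_password="changeme"
           remove_all_data_on_view_change="true"/>
```

With remove_all_data_on_view_change="true", the coordinator clears and rewrites all rows on every view change, which matches the behavior described above.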
@xgp I am trying to configure Keycloak 20.0.5 and am currently using JDBC_PING for cluster discovery. We are using the following JDBC_PING configuration and it is working, but I can’t tell whether this is a good discovery mechanism for a heavily loaded system. Also, is it a good idea to use connection pooling in this configuration? I have not been able to find MySQL connection pool configuration for Keycloak 20.0.5 (Quarkus distro). Hoping to hear back from you.
This is just the connection used for node discovery. The load on this connection will not change under a “heavy load system”. We use a similar setup for 5-7 nodes in a system with 10+ million users and multiple daily logins, and it is not a problem.
Are you experiencing a problem? If so, post your results here.
@xgp Thank you for the response. I was getting “too many connections” errors from the MySQL node used by the JDBC_PING protocol while load-testing the system with JMeter, so I started to doubt whether the connections created by JGroups are being closed or not.
JMeter test case configuration:
Total parallel users: 350
Delay between requests per user: 0 ms
Total Keycloak nodes: 2 (on prod we will have 6 nodes with the same configuration)
Keycloak Infinispan config: we will have millions of users, and I was wondering what the right Keycloak configuration for infinispan.xml would be.
Did you discover anything on the connection side? I am using Postgres with the same config, and JGroups is creating far more additional connections than Keycloak itself.
I was wondering whether there is a way to make JGroups use some kind of connection pool where we can limit the maximum number of connections it can open.
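One thing worth investigating: JGroups’ JDBC_PING supports a `datasource_jndi_name` attribute, which makes it borrow connections from an existing pooled DataSource (looked up via JNDI) instead of opening raw JDBC connections itself. Whether a JNDI-bound datasource is actually available in the Keycloak Quarkus distribution is a separate question; the snippet below only sketches the JGroups side, and the JNDI name is a placeholder.

```xml
<!-- Sketch: point JDBC_PING at a pooled DataSource via JNDI instead of raw connections. -->
<!-- "java:jboss/datasources/KeycloakDS" is a placeholder; use whatever name your deployment binds. -->
<JDBC_PING datasource_jndi_name="java:jboss/datasources/KeycloakDS"
           remove_all_data_on_view_change="true"/>
```

If a pooled datasource can be wired in this way, the pool’s own max-size setting would then cap the number of connections JDBC_PING can hold at once.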