Use of JDBC_PING with Keycloak 17 (Quarkus distro)

Hi, I was wondering if you could help me out.
My settings and setup are exactly like yours.
I am also getting the same errors in the logs.

Would you be able to let me know if you got around the issue and successfully clustered your machines?

Also, if possible, could you share how this issue was resolved?

Thank you very much in advance.

Hi, I am a newbie to Keycloak clustering.
I have been tasked with exploring whether setting up two nodes on AWS EC2 as a Keycloak cluster is possible.

I have been reading and re-reading this and other related threads, plus the docs.

I seem to have all the settings you describe, but I just can’t get the cache and cluster members to sync up.

I am using a local PostgreSQL server on the first node, which is a larger EC2 instance.

Wondering if you could provide some insights as to what other areas I might look into.

Thanks in advance!

Hi @Himay45,

I have the same problem as you. Do you have any suggestions?

After some testing I’ve discovered that one node, the active cluster coordinator, is completely rewriting the cluster member state in the table; this is due to the “remove_all_data_on_view_change” option being set to true. Because only one node is updating the table, the bind_addr column is always set to the coordinator’s IP address.

The ping_data column has a serialized object containing the connection data:
http://www.jgroups.org/javadoc/org/jgroups/protocols/PingData.html

I’m still keeping both the optional bind_addr and updated columns for troubleshooting purposes, but they aren’t used for cluster communication.
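
If you want the table to show every member while debugging, here is a minimal sketch with that option flipped, assuming the same stack layout as the configs further down this thread (the PostgreSQL driver and env-variable URL are placeholders to adapt):

<stack name="jdbc-ping-tcp" extends="tcp">
  <!-- with remove_all_data_on_view_change="false" every node inserts and
       keeps its own row, so own_addr/ping_data reflect all members instead
       of only what the coordinator last wrote; the trade-off is that rows
       from crashed nodes are no longer purged on view changes -->
  <JDBC_PING connection_driver="org.postgresql.Driver"
             connection_username="${env.KC_DB_USERNAME}"
             connection_password="${env.KC_DB_PASSWORD}"
             connection_url="jdbc:postgresql://${env.KC_DB_URL_HOST}/${env.KC_DB_URL_DATABASE}"
             remove_all_data_on_view_change="false"
             stack.combine="REPLACE"
             stack.position="MPING" />
</stack>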

@xgp I am trying to configure Keycloak 20.0.5 and am currently using JDBC_PING for cluster discovery. We are using the following JDBC_PING configuration and it is working, but I can’t tell whether this is a good discovery method for a heavily loaded system. Also, is it a good idea to use connection pooling in this configuration? I have not been able to find a MySQL connection pool configuration for Keycloak 20.0.5 (Quarkus distro). Hoping to hear back from you.

> <jgroups>
>     <stack name="jdbc-ping-tcp" extends="tcp">
>       <JDBC_PING connection_driver="com.mysql.cj.jdbc.Driver"
>                  connection_username="${env.KC_DB_USERNAME}" connection_password="${env.KC_DB_PASSWORD}"
>                  connection_url="jdbc:mysql://${env.KC_DB_URL_HOST}/${env.KC_DB_URL_DATABASE}"
>                  initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, ping_data BLOB, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name));"
>                  info_writer_sleep_time="500"
>                  remove_all_data_on_view_change="true"
>                  stack.combine="REPLACE"
>                  stack.position="MPING" />
>     </stack>
>   </jgroups>

This is just the connection used for node discovery. The “load” on this connection will not change under a “heavy load system”. We use a similar setup for 5–7 nodes in a system with 10+ million users and multiple daily logins, and it is not a problem.

Are you experiencing a problem? If so, post your results here.
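
On the pooling part of the question: as far as I can tell, JDBC_PING opens a short-lived driver connection per read or write and closes it again, but it can also be pointed at a container-managed pool through its datasource_jndi_name attribute. Whether the Quarkus distro exposes its connection pool over JNDI is something I have not verified, so treat this as a sketch; the JNDI name is a placeholder:

<!-- sketch, unverified on the Quarkus distro: borrow pooled connections
     from a DataSource registered in JNDI instead of opening raw driver
     connections; adapt the placeholder JNDI name to your deployment -->
<JDBC_PING datasource_jndi_name="java:/datasources/KeycloakDS"
           remove_all_data_on_view_change="true"
           stack.combine="REPLACE"
           stack.position="MPING" />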

@xgp Thank you for the response. I was getting “too many connections” errors from the MySQL node used by this JDBC_PING protocol while load testing the system with JMeter. So I was wondering whether the connections created by JGroups are being closed or not.

JMeter test case configuration:

  1. Total parallel users: 350
  2. Delay between each user’s requests: 0 ms

Total Keycloak nodes: 2 (in prod we will have 6 nodes with the same configuration)

Keycloak Infinispan config: we will have millions of users, and I was wondering what the right Keycloak configuration for infinispan.xml would be.

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
    xmlns="urn:infinispan:config:11.0">

  <!-- custom stack goes into the jgroups element -->

  <jgroups>
    <stack name="jdbc-ping-tcp" extends="tcp">
      <JDBC_PING connection_driver="com.mysql.cj.jdbc.Driver"
                 connection_username="${env.KC_DB_USERNAME}" connection_password="${env.KC_DB_PASSWORD}"
                 connection_url="jdbc:mysql://${env.KC_DB_URL_HOST}/${env.KC_DB_URL_DATABASE}"
                 initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, ping_data BLOB, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name));"
                 info_writer_sleep_time="500"
                 remove_all_data_on_view_change="true"
                 stack.combine="REPLACE"
                 stack.position="MPING" />
    </stack>
  </jgroups>

  <cache-container name="keycloak" statistics="true">
    <!-- custom stack must be referenced by name in the stack attribute of the transport element -->
    <transport lock-timeout="60000" stack="jdbc-ping-tcp"/>
    <local-cache name="realms">
      <encoding>
        <key media-type="application/x-java-object"/>
        <value media-type="application/x-java-object"/>
      </encoding>
      <memory max-count="10000"/>
    </local-cache>
    <local-cache name="users">
      <encoding>
        <key media-type="application/x-java-object"/>
        <value media-type="application/x-java-object"/>
      </encoding>
      <memory max-count="10000"/>
    </local-cache>
    <distributed-cache name="sessions" owners="2"  statistics="true">
        <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="authenticationSessions" owners="2">
      <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="offlineSessions" owners="2">
      <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="clientSessions" owners="2"  statistics="true" >
      <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="offlineClientSessions" owners="2" >
      <expiration lifespan="-1"/>
    </distributed-cache>
    <distributed-cache name="loginFailures" owners="2" >
      <expiration lifespan="-1"/>
    </distributed-cache>
    <local-cache name="authorization">
      <encoding>
        <key media-type="application/x-java-object"/>
        <value media-type="application/x-java-object"/>
 <memory max-count="10000"/>
    </local-cache>
    <replicated-cache name="work">
      <expiration lifespan="-1"/>
    </replicated-cache>
    <local-cache name="keys">
      <encoding>
        <key media-type="application/x-java-object"/>
        <value media-type="application/x-java-object"/>
      </encoding>
      <expiration max-idle="3600000"/>
      <memory max-count="1000"/>
    </local-cache>
    <distributed-cache name="actionTokens" owners="2" >
      <encoding>
        <key media-type="application/x-java-object"/>
        <value media-type="application/x-java-object"/>
      </encoding>
      <expiration max-idle="-1" lifespan="-1" interval="300000"/>
      <memory max-count="-1"/>
    </distributed-cache>
  </cache-container>
</infinispan>
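
One sizing note on the config above: owners="2" means each entry in a distributed cache lives on exactly two nodes, however many nodes the cluster has. On the planned 6-node cluster, losing the two nodes that own a given session loses that session. Raising owners trades memory and replication traffic for resilience; a sketch, where 3 is an illustration rather than a recommendation:

<!-- sketch: each session entry is now kept on three of the cluster's
     nodes, so any two nodes can fail without losing sessions -->
<distributed-cache name="sessions" owners="3" statistics="true">
  <expiration lifespan="-1"/>
</distributed-cache>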

Did you discover anything on the connection part? I am using Postgres with the same config, and JGroups is creating many more connections than Keycloak itself.

I was wondering whether there is a way to have JGroups use some kind of connection pool where we can limit the maximum number of connections it can make.
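
As far as I can tell from the JDBC_PING/FILE_PING attribute docs, every discovery read or write opens its own short-lived connection, and after a view change the info writer re-writes the node’s own row a few times, sleeping info_writer_sleep_time between writes. Besides the pooled-DataSource sketch earlier in the thread, one knob to reduce churn is slowing that writer down; a sketch, where 5000 ms is an arbitrary illustration:

<!-- sketch: a longer info-writer interval means fewer short-lived JDBC
     connections after view changes, at the cost of staler rows -->
<JDBC_PING connection_driver="org.postgresql.Driver"
           connection_username="${env.KC_DB_USERNAME}"
           connection_password="${env.KC_DB_PASSWORD}"
           connection_url="jdbc:postgresql://${env.KC_DB_URL_HOST}/${env.KC_DB_URL_DATABASE}"
           info_writer_sleep_time="5000"
           remove_all_data_on_view_change="true"
           stack.combine="REPLACE"
           stack.position="MPING" />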

@xgp if you have time, can you help here? I am also facing a similar issue.

Is there any way to change the CredentialProvider for the default user federation? I want to migrate all my users but do not want to change their passwords.