Keycloak Quarkus with 2 nodes

Hi everyone,

I’m testing Keycloak 18.0.0 with Quarkus.

I am impressed by the work done: congratulations to all the people working on the project :slight_smile:
It’s much easier to set up and configure :slight_smile:

I have seen this post:

In which dasniko explains that there is no "domain" mode, etc.

But I'm not sure I'm going in the right direction to set up 2 nodes (active/active or active/passive).
My need is simple:

  • worst case: if node 1 fails, I would like to stay up thanks to node 2
  • nice case: to have an active/active cluster

What I did:

  • VM 1/3: I configured Nginx as reverse proxy + load balancer
  • VM 2/3: I installed/configured Keycloak 18.0.0
  • VM 3/3: I installed/configured Keycloak 18.0.0
  • my database is PostgreSQL:
    • master in VM 2/3
    • slave in VM 3/3

If I stop Keycloak on VM 2/3, it seems to work: Nginx load-balances to VM 3/3.
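For context, the Nginx part is essentially something like the sketch below (the upstream name, IP addresses, ports and TLS handling are placeholders, not my exact configuration; behind such a proxy Keycloak itself would typically run with --proxy edge and --http-enabled=true, but double-check the reverse proxy guide for your version):

# sketch only: adjust IPs, ports, server_name and TLS to your environment
cat > /etc/nginx/conf.d/keycloak.conf <<'EOF'
upstream keycloak_cluster {
    ip_hash;                    # crude session affinity (optional)
    server 192.0.2.2:8080;      # Keycloak on VM 2/3
    server 192.0.2.3:8080;      # Keycloak on VM 3/3
}

server {
    listen 80;                  # in production, terminate TLS here instead
    server_name sso.example.com;

    location / {
        proxy_pass http://keycloak_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
nginx -t && systemctl reload nginx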

But I have doubts…

Is this the right solution/approach?

How should I deal with the PostgreSQL database schema updates?
For example, should I configure spi-connections-jpa-default-migration-strategy:

  • to 'update' on VM 2/3
  • to 'manual' on VM 3/3

to avoid a conflict?

Is it possible to have an active/active system with this Quarkus-based distribution?
If yes, how?

Sorry, maybe I missed some documentation somewhere…
But all the HA documentation I found seems to be related to WildFly :confused:

Last question.
With the console in previous versions, it was possible, for example, to deploy a theme directly to all nodes.
Is that possible with the Quarkus distribution?

Thank you very much :slight_smile:

I’m sorry to insist, but does anyone know the answers, please? :confused:

I'm answering my own question here, in case it helps someone :wink:

Cluster in active/active:

  • is it possible: yes
  • why I had a problem:
    • in my case, the server hosting my Keycloak has more than one IP address, and Keycloak was listening on the wrong one in the cluster context
    • I had to force which IP address Keycloak listens on
  • how?
  1. Edit the file /path/to/keycloak/conf/cache-ispn.xml
  2. Replace:
<infinispan
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
        xmlns="urn:infinispan:config:11.0">

    <cache-container name="keycloak">
        <transport lock-timeout="60000"/>

With:

<infinispan
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
        xmlns="urn:infinispan:config:11.0">

    <!-- CONFIGURATION FOR THE CLUSTER -->
    <jgroups>
        <stack name="company-ping-tcp" extends="tcp">
            <TCP bind_addr="<ip address on which to listen>" bind_port="7800" />
            <TCPPING
                initial_hosts="<ip address server 1>[7800],<ip address server 1>[7800]"
                port_range="0"
                stack.combine="REPLACE"
                stack.position="MPING"
                />
        </stack>
        <!-- company-ping-jdbc is no longer used -->
        <stack name="company-ping-jdbc" extends="tcp">
            <JDBC_PING connection_driver="org.postgresql.Driver"
                        connection_username="<PostgreSQL username>"
                        connection_password="<PostgreSQL password>"
                        connection_url="<PostgreSQL URL>"
                        initialize_sql="CREATE SCHEMA IF NOT EXISTS ${env.KC_DB_SCHEMA:public}; CREATE TABLE IF NOT EXISTS ${env.KC_DB_SCHEMA:public}.JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, bind_addr varchar(200) NOT NULL, update>
                        insert_single_sql="INSERT INTO ${env.KC_DB_SCHEMA:public}.JGROUPSPING (own_addr, cluster_name, bind_addr, updated, ping_data) values (?, ?, '${env.JGROUPS_DISCOVERY_EXTERNAL_IP:127.0.0.1}', NOW(), ?);"
                        delete_single_sql="DELETE FROM ${env.KC_DB_SCHEMA:public}.JGROUPSPING WHERE own_addr=? AND cluster_name=?;"
                        select_all_pingdata_sql="SELECT ping_data, own_addr, cluster_name FROM ${env.KC_DB_SCHEMA:public}.JGROUPSPING WHERE cluster_name=?"
                        info_writer_sleep_time="500"
                        remove_all_data_on_view_change="true"
                        stack.combine="REPLACE"
                        stack.position="MPING" />
        </stack>
    </jgroups>
    <!-- /CONFIGURATION FOR THE CLUSTER -->

    <cache-container name="keycloak">
        <!-- CONFIGURATION FOR THE CLUSTER -->
        <transport lock-timeout="60000" stack="company-ping-tcp" />
        <!-- /CONFIGURATION FOR THE CLUSTER -->
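For completeness: editing conf/cache-ispn.xml alone is not the whole story, the server also has to be (re)built/restarted with the Infinispan cache enabled so the file is picked up. Roughly like this (a sketch from memory; please double-check the exact option names with bin/kc.sh build --help, and note that start-dev uses the local cache, so clustering only works with start):

cd /path/to/keycloak

# 'ispn' is the default cache for 'kc.sh start'; the config file is resolved
# relative to the conf/ directory, and cache-ispn.xml should already be the default value
bin/kc.sh build --cache=ispn --cache-config-file=cache-ispn.xml

# then start the server as usual on each node (plus your hostname/proxy/db options)
bin/kc.sh start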

About the spi-connections-jpa-default-migration-strategy:

  • for the moment, I configured the parameter to "update" on every node
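Concretely, I set it like any other SPI option of the Quarkus distribution, either on the command line or in conf/keycloak.conf (sketch only; the option name is the one from my question above, so please verify it against your Keycloak version):

# either as a CLI option when starting...
bin/kc.sh start --spi-connections-jpa-default-migration-strategy=update

# ...or as a line in conf/keycloak.conf
echo 'spi-connections-jpa-default-migration-strategy=update' >> /path/to/keycloak/conf/keycloak.conf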

About the theme deployment:

  • for the moment, I deploy it on each node
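In practice that just means copying the theme folder into the themes/ directory of each node, something like this (host names and paths are only examples):

# run from the machine holding the theme sources
for node in vm2.example.com vm3.example.com; do
    scp -r mytheme/ keycloak@"$node":/path/to/keycloak/themes/
done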

@Kortex Hi, thanks so much. I had the same question. I just have another question: according to this link, there are several ways to set up a Keycloak cluster.
You mentioned both the TCPPING and JDBC_PING configurations. I was wondering if you know how to configure the PING solution, which uses the UDP protocol.

Thanks in advance

Hi @Himay45

Thanks to your message, I noticed that I copied/pasted my configuration with 2 mistakes:

  • a block that is no longer used (the JDBC approach)
  • a bad value replacement: <ip address server 1> was used twice

Here is the right block to keep:

<infinispan
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
        xmlns="urn:infinispan:config:11.0">

    <!-- CONFIGURATION FOR THE CLUSTER -->
    <jgroups>
        <stack name="company-ping-tcp" extends="tcp">
            <TCP bind_addr="<ip address on which to listen>" bind_port="7800" />
            <TCPPING
                initial_hosts="<ip address server 1>[7800],<ip address server 2>[7800]"
                port_range="0"
                stack.combine="REPLACE"
                stack.position="MPING"
                />
        </stack>
    </jgroups>
    <!-- /CONFIGURATION FOR THE CLUSTER -->

    <cache-container name="keycloak">
        <!-- CONFIGURATION FOR THE CLUSTER -->
        <transport lock-timeout="60000" stack="company-ping-tcp" />
        <!-- /CONFIGURATION FOR THE CLUSTER -->
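Also worth noting for anyone copying this: with TCPPING the two servers must be able to reach each other on the JGroups port (7800 in this configuration), so make sure no firewall blocks it. A quick check (placeholder IP):

# from server 1 towards server 2, and the other way round
nc -zv 192.0.2.3 7800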

About your question, I'm sorry but I don't know, and unfortunately I don't have the time to test it at the moment :confused:
I can't promise anything, but if I find the time I will test/search and publish the results here.
If you find a solution before then, could you please share it here? It might help other people :slight_smile:

@Kortex Thanks so much. I will check that

Hi @Himay45

I did a quick test.
It is not perfect: I get a lot of errors in the logs.
But it works with this:

  • server 1:
    <!-- CONFIGURATION FOR THE CLUSTER -->
    <jgroups>
        <stack name="company-ping-udp" extends="udp">
            <UDP bind_addr="<IP server 1>" mcast_addr="239.6.7.8" mcast_port="46655" ip_mcast="true" />
            <MPING mcast_addr="239.6.7.8"
               mcast_port="46655"
               num_discovery_runs="3"
               ip_ttl="2" />
        </stack>
    </jgroups>
    <!-- /CONFIGURATION FOR THE CLUSTER -->

    <cache-container name="keycloak">
        <!-- CONFIGURATION FOR THE CLUSTER -->
        <transport lock-timeout="60000" stack="company-ping-udp" />
        <!-- /CONFIGURATION FOR THE CLUSTER -->
  • server 2:
    <!-- CONFIGURATION FOR THE CLUSTER -->
    <jgroups>
        <stack name="company-ping-udp" extends="udp">
            <UDP bind_addr="<IP server 2>" mcast_addr="239.6.7.8" mcast_port="46655" ip_mcast="true" />
            <MPING mcast_addr="239.6.7.8"
               mcast_port="46655"
               num_discovery_runs="3"
               ip_ttl="2" />
        </stack>
    </jgroups>
    <!-- /CONFIGURATION FOR THE CLUSTER -->

    <cache-container name="keycloak">
        <!-- CONFIGURATION FOR THE CLUSTER -->
        <transport lock-timeout="60000" stack="company-ping-udp" />
        <!-- /CONFIGURATION FOR THE CLUSTER -->
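To check that the two nodes really joined the same cluster, I looked at the startup logs: when clustering works, Infinispan logs a new cluster view listing both members (a message along the lines of "Received new cluster view", if I remember the wording correctly). For example (the log path is just an example):

# on each node, look for the JGroups/Infinispan cluster view in the Keycloak log
grep -i "received new cluster view" /path/to/keycloak.log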

@Kortex Thanks so much for your response, and sorry for the late reply. I'll check asap.

Using this, I found out that the configuration for clustering with UDP PING should be like the one below.
This configuration didn't cause any errors:

  • server 1:
    <jgroups>
        <stack name="company-ping-udp" extends="udp">
            <UDP bind_addr="${jgroups.bind.address,jgroups.udp.address:<IP_SERVER_1>}"
                 bind_port="${jgroups.bind.port,jgroups.udp.port:55200}"
                 mcast_addr="${jgroups.mcast_addr:230.0.0.4}"
                 mcast_port="${jgroups.mcast_port:45688}"
                 tos="0"
                 ucast_send_buf_size="1m"
                 mcast_send_buf_size="1m"
                 ucast_recv_buf_size="20m"
                 mcast_recv_buf_size="25m"
                 ip_ttl="${jgroups.ip_ttl:2}"
                 thread_naming_pattern="pl"
                 enable_diagnostics="false"
                 bundler_type="transfer-queue"
                 max_bundle_size="8500"
                 thread_pool.min_threads="${jgroups.thread_pool.min_threads:0}"
                 thread_pool.max_threads="${jgroups.thread_pool.max_threads:200}"
                 thread_pool.keep_alive_time="60000"
                 thread_dumps_threshold="${jgroups.thread_dumps_threshold:10000}" />
        </stack>
    </jgroups>
  • server 2:
    <jgroups>
        <stack name="company-ping-udp" extends="udp">
            <UDP bind_addr="${jgroups.bind.address,jgroups.udp.address:<IP_SERVER_2>}"
                 bind_port="${jgroups.bind.port,jgroups.udp.port:55200}"
                 mcast_addr="${jgroups.mcast_addr:230.0.0.4}"
                 mcast_port="${jgroups.mcast_port:45688}"
                 tos="0"
                 ucast_send_buf_size="1m"
                 mcast_send_buf_size="1m"
                 ucast_recv_buf_size="20m"
                 mcast_recv_buf_size="25m"
                 ip_ttl="${jgroups.ip_ttl:2}"
                 thread_naming_pattern="pl"
                 enable_diagnostics="false"
                 bundler_type="transfer-queue"
                 max_bundle_size="8500"
                 thread_pool.min_threads="${jgroups.thread_pool.min_threads:0}"
                 thread_pool.max_threads="${jgroups.thread_pool.max_threads:200}"
                 thread_pool.keep_alive_time="60000"
                 thread_dumps_threshold="${jgroups.thread_dumps_threshold:10000}" />
        </stack>
    </jgroups>
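One caveat with the UDP approach: the network between the two servers has to allow IP multicast. A quick way to test that, independently of Keycloak, is omping, run at the same time on both hosts (the tool choice and the placeholder IPs are just a suggestion, not something from the Keycloak docs):

# run simultaneously on server 1 and server 2 (placeholder IPs)
omping 192.0.2.2 192.0.2.3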

@Kortex Thanks so much for all your help