I am impressed by the work done: congratulations to all the people working on the project
It’s much easier to set up and configure
I have seen this post:
in which dasniko explains there is no “domain” mode any more, etc.
But I’m not sure I’m heading in the right direction to set up 2 nodes (active/active or active/passive).
My need is simple:
worst case: if node 1 fails, I would like the service to stay up thanks to node 2
nice case: to have an active/active cluster
What I did:
VM 1/3: I configured Nginx as reverse proxy + load balancer
VM 2/3: I installed/configured Keycloak 18.0.0
VM 3/3: I installed/configured Keycloak 18.0.0
my database is PostgreSQL:
master in VM 2/3
slave in VM 3/3
If I stop Keycloak on VM 2/3, it seems to work: Nginx load-balances to VM 3/3
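For reference, the VM 1/3 part of this setup could look roughly like the following Nginx sketch. This is not from the original post: the upstream name, IP addresses, ports, and server name are all placeholders.

```
# /etc/nginx/conf.d/keycloak.conf -- minimal sketch, placeholder values
upstream keycloak_cluster {
    # ip_hash keeps a client on the same node (sticky sessions);
    # remove it for plain round-robin active/active
    ip_hash;
    server 192.0.2.2:8080;   # VM 2/3
    server 192.0.2.3:8080;   # VM 3/3
}

server {
    listen 443 ssl;
    server_name sso.example.com;
    # ssl_certificate / ssl_certificate_key directives omitted here

    location / {
        proxy_pass http://keycloak_cluster;
        # headers Keycloak needs when running behind a reverse proxy
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

With both `server` entries healthy, Nginx distributes requests across the two nodes; when one node stops responding, it routes everything to the remaining one, which matches the failover behaviour described above.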
But I have doubts…
Is this the right solution/approach?
How should I deal with PostgreSQL database schema updates?
For example should I configure spi-connections-jpa-default-migration-strategy:
to ‘update’ on VM 2/3
to ‘manual’ on VM 3/3
To avoid a conflict?
Is it possible to have an active/active system with this Quarkus-based distribution?
If yes, how?
Sorry, maybe I missed some documentation somewhere…
But all the HA documentation I have found seems to be related to WildFly
Last question.
With the admin console in previous versions, it was possible to deploy a theme directly to all nodes, for example.
Is it possible with Quarkus?
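(For context, one workaround, sketched here and not from the original thread: in the Quarkus distribution themes are plain directories under `themes/`, so they can simply be copied to every node. Hostnames and paths below are placeholders.)

```
# copy a custom theme to each node's themes/ directory (placeholder hosts/paths)
for host in vm2.example.com vm3.example.com; do
    scp -r ./mytheme "$host":/opt/keycloak/themes/
done
```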
In my case I had problems because the server hosting my Keycloak has more than one IP address, and Keycloak was listening on the wrong IP address in a cluster context.
I had to force which IP address to listen on.
How?
Edit the file /path/to/keycloak/conf/cache-ispn.xml
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
xmlns="urn:infinispan:config:11.0">
<!-- CONFIGURATION FOR THE CLUSTER -->
<jgroups>
<stack name="company-ping-tcp" extends="tcp">
<TCP bind_addr="<ip address on which to listen>" bind_port="7800" />
<TCPPING
initial_hosts="<ip address server 1>[7800],<ip address server 1>[7800]"
port_range="0"
stack.combine="REPLACE"
stack.position="MPING"
/>
</stack>
<!-- company-ping-jdbc is no longer used -->
<stack name="company-ping-jdbc" extends="tcp">
<JDBC_PING connection_driver="org.postgresql.Driver"
connection_username="<PostgreSQL username>"
connection_password="<PostgreSQL password>"
connection_url="<PostgreSQL URL>"
initialize_sql="CREATE SCHEMA IF NOT EXISTS ${env.KC_DB_SCHEMA:public}; CREATE TABLE IF NOT EXISTS ${env.KC_DB_SCHEMA:public}.JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, bind_addr varchar(200) NOT NULL, update>
insert_single_sql="INSERT INTO ${env.KC_DB_SCHEMA:public}.JGROUPSPING (own_addr, cluster_name, bind_addr, updated, ping_data) values (?, ?, '${env.JGROUPS_DISCOVERY_EXTERNAL_IP:127.0.0.1}', NOW(), ?);"
delete_single_sql="DELETE FROM ${env.KC_DB_SCHEMA:public}.JGROUPSPING WHERE own_addr=? AND cluster_name=?;"
select_all_pingdata_sql="SELECT ping_data, own_addr, cluster_name FROM ${env.KC_DB_SCHEMA:public}.JGROUPSPING WHERE cluster_name=?"
info_writer_sleep_time="500"
remove_all_data_on_view_change="true"
stack.combine="REPLACE"
stack.position="MPING" />
</stack>
</jgroups>
<!-- /CONFIGURATION FOR THE CLUSTER -->
<cache-container name="keycloak">
<!-- CONFIGURATION FOR THE CLUSTER -->
<transport lock-timeout="60000" stack="company-ping-tcp" />
<!-- /CONFIGURATION FOR THE CLUSTER -->
About the spi-connections-jpa-default-migration-strategy:
for the moment I configured the parameter to “update” on every node
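As a sketch, assuming the option is set via the configuration file (it can equally be passed on the command line; the option name is the one used in this thread):

```
# conf/keycloak.conf -- same value on both nodes, as described above
spi-connections-jpa-default-migration-strategy=update
```

Or, equivalently, at startup:

```
bin/kc.sh start --spi-connections-jpa-default-migration-strategy=update
```

With “update” on both nodes, whichever node starts first applies the schema migration; Liquibase takes a lock in the database, so the second node should wait rather than run the same migration concurrently.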
@Kortex Hi, thanks so much. I had the same question. I just have another question: according to this link, there are several ways to set up a Keycloak cluster.
You mentioned both the TCPPING and JDBC_PING configurations. I was wondering if you know how to configure the PING solution, which uses the UDP protocol.
Thanks to your message, I noticed that I copied/pasted my configuration with 2 mistakes:
a block no longer used (the JDBC approach)
a bad value replacement: <ip address server 1> used twice
Here is the right block to keep:
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
xmlns="urn:infinispan:config:11.0">
<!-- CONFIGURATION FOR THE CLUSTER -->
<jgroups>
<stack name="company-ping-tcp" extends="tcp">
<TCP bind_addr="<ip address on which to listen>" bind_port="7800" />
<TCPPING
initial_hosts="<ip address server 1>[7800],<ip address server 2>[7800]"
port_range="0"
stack.combine="REPLACE"
stack.position="MPING"
/>
</stack>
</jgroups>
<!-- /CONFIGURATION FOR THE CLUSTER -->
<cache-container name="keycloak">
<!-- CONFIGURATION FOR THE CLUSTER -->
<transport lock-timeout="60000" stack="company-ping-tcp" />
<!-- /CONFIGURATION FOR THE CLUSTER -->
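To make Keycloak actually pick up this file, it has to be referenced at build time (a sketch; `cache-config-file` expects the file to be in the `conf/` directory):

```
# Keycloak 18, Quarkus distribution -- reference the custom Infinispan file
bin/kc.sh build --cache=ispn --cache-config-file=cache-ispn.xml
bin/kc.sh start
```

When the nodes come up, the startup log should show a JGroups cluster view containing both members, which is a quick way to confirm the `company-ping-tcp` stack is in use.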
About your question, I’m sorry but I don’t know, and unfortunately I don’t have the time to test it at the moment.
I can’t promise anything, but if I find the time I will test/search and publish the results here.
If you find a solution before then, could you please share it here? It might help other people.