Database High Availability and Failover (2)

I have a two-instance PostgreSQL cluster set up with one writer and one read-only instance. A failover event promotes the read-only instance to writer. How does one configure Keycloak 15 to gracefully handle a failover from one instance to the other? Currently the Keycloak server logs PSQLException: This connection has been closed. and it never recovers or restores the connection after the failover.

Initially Keycloak was configured with a single URL that we updated during a failover event to point at the address of the new active instance. I tried setting networkaddress.cache.ttl to 10 in the global java.security file. No effect. Then I tried passing -Djava.security.properties=/opt/rh-sso/java.security with networkaddress.cache.ttl set to 10 in that file. No effect.
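For reference, the override-file approach only takes effect if the JDK's master java.security file has security.overridePropertiesFile=true (the default in most JDK builds), which is worth checking when the property appears to be ignored. A minimal sketch of the override file, using the path from the post:

```properties
# /opt/rh-sso/java.security (path from the post)
# Cache successful DNS lookups for only 10 seconds so a DNS-based
# failover (repointing the hostname) is picked up quickly.
networkaddress.cache.ttl=10
```

Note that this only shortens the JVM's DNS cache; already-established pooled connections to the old instance are not affected by it.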

The next approach I tried was providing multiple hosts in the JDBC connection string, but it did not respond to or handle a failover of the database between instances. An unintended consequence was that, because both instances are normally online (one writable and the other read-only), connections would fail if/when the first host listed was the read-only one.
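For completeness: the pgjdbc multi-host syntax supports a targetServerType URL parameter that tells the driver to connect only to a host that accepts writes (it checks each candidate's transaction_read_only state), which should avoid sticking to the read-only host. On the 42.2.x driver the value is master; newer releases also accept primary. A hedged sketch with placeholder host names:

```xml
<!-- Hypothetical hosts; targetServerType=master makes the driver skip
     hosts that are in read-only (hot standby) mode when connecting. -->
<connection-url>jdbc:postgresql://db1.example.com:5432,db2.example.com:5432/keycloak?targetServerType=master</connection-url>
```

This only affects how new connections are established; existing broken connections still need validation (as described below in the thread) to be evicted from the pool.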

Is there a way to get Keycloak to gracefully handle PostgreSQL database failovers?

KC Version 15.0.8
OS Version RHEL 8.8
JDBC Version postgresql-42.2.24.jar

Thanks in advance

AFAIK Keycloak does not do this for you.

If I read your post correctly, you are using Red Hat SSO, so you have access to the (paid!) Red Hat Support. Why not just ask them whether they have a solution for your request?


I have been waiting for months on the support information and access to it for our Red Hat SSO.

Hi @davsum, if you are using Keycloak 15, you are still on the WildFly-based distribution. What you are encountering is a quite common issue, and it is not specific to Keycloak. Adding the following to your datasource configuration inside the standalone.xml file should help:

    <validation>
        <check-valid-connection-sql>select 1</check-valid-connection-sql>
        <validate-on-match>false</validate-on-match>
        <background-validation>true</background-validation>
        <background-validation-millis>10000</background-validation-millis>
    </validation>

By datasource I mean the section where the connection to the database is configured:

    <datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true">
        <connection-url>jdbc:postgresql://YOUR_DB_URL/YOUR_DB_NAME</connection-url>
        <driver>postgresql</driver>
        <validation>
            <check-valid-connection-sql>select 1</check-valid-connection-sql>
            <validate-on-match>false</validate-on-match>
            <background-validation>true</background-validation>
            <background-validation-millis>10000</background-validation-millis>
        </validation>
        <!-- other stuff goes here -->
    </datasource>

Thank you. I was already going down the WildFly datasource configuration route, so it's good to know I was barking up the right tree. I was in the middle of filtering through a pile of half-examples and examples applicable to different versions and database types, but your example did the trick.

Is there any need for, or benefit to, configuring a connection pool? I ask because I added one, as it was not there previously and was a step in several of the tutorials/examples.

Good to hear it worked out. Connection pooling is meant to improve performance; there is plenty of literature about it. I am also curious to know what you are using for clustering Postgres.
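As a sketch, a pool is configured in the same datasource section via a pool element; the sizes below are illustrative only and should be tuned to your load:

```xml
<!-- Illustrative sizes; min-pool-size connections are kept open,
     max-pool-size caps concurrent connections to the database. -->
<pool>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>20</max-pool-size>
    <prefill>true</prefill>
</pool>
```

Background validation (shown earlier in the thread) is what evicts dead connections from this pool after a failover, so the two settings work together.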

It is AWS RDS with the PostgreSQL engine in a multi-AZ deployment (1 writer + 1 reader).

It seemed to recover the connection after performing a failover between the two RDS instances. I thought I might have to do the TTL trick, but that doesn't seem to be necessary.
