Use of JDBC_PING with Keycloak 17 (Quarkus distro)

I opened a discussion on this here: JGroups JDBC_PING discovery protocol for Infinispan · Discussion #10177 · keycloak/keycloak · GitHub

I have many customers that use the JDBC_PING discovery protocol for JGroups/Infinispan because of network topology and convenience. I’m trying to help them upgrade to the new Quarkus distribution of Keycloak. A few issues I’ve encountered so far:

  • There is no “default” stack that uses JDBC_PING for discovery. I suppose it would be possible to use the conf/cache-ispn.xml file to add it, however
  • There is no datasource bound to JNDI for the Keycloak database, which means you’d have to put the database connection settings in the conf/cache-ispn.xml file, and
  • If you have to put the database connection settings in that file, you couldn’t really use the kc.sh build functionality, as you’d be building an “optimized” image that would only work for a specific environment, and you’d be embedding database credentials in that image (bad practice).

If anyone has more information on how to use JDBC_PING with Keycloak 17, please help. I’ll post more information here as I figure this out.

7 Likes

Addition to the conf/cache-ispn.xml file:

...
  <jgroups>
    <stack name="jdbc-ping" extends="tcp">
      <JDBC_PING connection_driver="org.postgresql.Driver"
                 connection_username="keycloak" connection_password="password"
                 connection_url="jdbc:postgres://postgres/keycloak" level="debug"
                 initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name));"
                 info_writer_sleep_time="500"
                 remove_all_data_on_view_change="true"
                 clear_table_on_view_change="true"
                 stack.combine="REPLACE"
                 stack.position="MPING" />
    </stack>
  </jgroups>
...
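
I assume the non-default stack then still has to be selected at build time. Roughly what I expect that to look like (untested at this point; kc.sh options as documented for the Quarkus distribution, paths relative to the Keycloak install):

# select the custom stack at build time, then start the pre-built ("optimized") server
bin/kc.sh build --cache=ispn --cache-stack=jdbc-ping
bin/kc.sh start --optimized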
1 Like

Update: Adding the stack to the Infinispan config file seems to have no effect. There was an issue right before the 17.0.0 release where you couldn’t set a JGroups stack other than the default, but that was supposed to have been resolved:

However, I can’t get the above addition to have any effect.

Update: With help from a few great community members and contributors, I was able to get a working example.

Hope this helps others that are trying to get a working setup using JDBC_PING.

7 Likes

Thank you @xgp for providing your example. :+1:

As I’m not an Infinispan expert (gladly? unfortunately? don’t know…), I have a question about the lifespan configuration. You are using the value -1 and thus disabling lifespan? In recent Keycloak versions, this was set to 900000000000000000, which is quite a long time (years…).
Do you know which is right or wrong?

For me, disabling it with -1 should be the same as using this high value. But the Keycloak folks introduced this high value sometime between KC 10 and 12, I guess. It was communicated as a “bug fix”. But why haven’t they set it to -1?

@dasniko I am also not an Infinispan expert, but I have been using JGroups for ~15 years (ouch! I’m old). I don’t know the reason behind the change from -1 to 9000..., but I will look around to try to understand it. The lifespan in the file in my example was taken directly from the conf/cache-ispn.xml file in the 17.0.0 distribution.

2 Likes

Thanks for the info, @xgp
Then I will go with the default setting of -1.
Perhaps this was an error in previous Infinispan versions and is now fixed…!?

P.S.: I‘m also grumpy old… :wink:

What would be the best way, when using that XML, to fill in the data needed to make the DB connection at runtime?
We are using a Docker image on ECS, so I cannot know that data when I build the image.

You can use environment variable replacement. I am doing the same thing (in ECS/Fargate) using the same environment variables specified for the docker image:

...
  <jgroups>
    <stack name="jdbc-ping-tcp" extends="tcp">
      <JDBC_PING connection_driver="org.postgresql.Driver"
                 connection_username="${env.KC_DB_USERNAME}" connection_password="${env.KC_DB_PASSWORD}"
                 connection_url="jdbc:postgresql://${env.KC_DB_URL_HOST}/${env.KC_DB_URL_DATABASE}"
                 initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name));"
                 info_writer_sleep_time="500"
                 remove_all_data_on_view_change="true"
                 stack.combine="REPLACE"
                 stack.position="MPING" />
    </stack>
  </jgroups>
...
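
For reference, a rough sketch of how those variables end up on the container at runtime (image name, host and credentials are placeholders; in ECS/Fargate they come from the task definition, and the image is assumed to have been built with this cache config plus whatever hostname/proxy options your deployment needs):

# the same KC_DB_* variables configure Keycloak's own datasource and resolve
# the ${env....} placeholders in the jgroups stack above
docker run \
  -e KC_DB=postgres \
  -e KC_DB_URL_HOST=postgres.example.internal \
  -e KC_DB_URL_DATABASE=keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=changeme \
  my-keycloak:17 start   # "my-keycloak:17" = placeholder for the custom-built image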
3 Likes

Thank you so much for your help!!

Putting this here in case it will help someone else.

First off, @xgp’s gist was extremely helpful, thank you! I, however, was using it with MySQL instead of Postgres, and therefore had to make some slight tweaks to the jgroups stack:

<!-- driver and initialize_sql changed -->
<jgroups>
  <stack name="jdbc-ping-tcp" extends="tcp">
    <JDBC_PING connection_driver="com.mysql.cj.jdbc.Driver"
               connection_username="${env.KC_DB_USERNAME}"
               connection_password="${env.KC_DB_PASSWORD}"
               connection_url="${env.KC_DB_URL}"
               info_writer_sleep_time="500"
               initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, ping_data VARBINARY(255), constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name));"
               remove_all_data_on_view_change="true"
               stack.combine="REPLACE"
               stack.position="MPING" />
  </stack>
</jgroups>

KC_DB_URL is an environment variable injected at runtime and is a standard JDBC URL (e.g., jdbc:mysql://<host>:<port>/keycloak).
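
For example (host, port and database name are made up):

export KC_DB_URL="jdbc:mysql://mysql.example.internal:3306/keycloak"
# read by Keycloak for its own datasource and by the ${env.KC_DB_URL} placeholder above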

3 Likes

Thanks a lot! This is very useful as I am using MariaDB.

However, any idea how I could access the DB settings when using the keycloak.conf file and no environment variables?

1 Like

I’m trying to run this configuration with Cloud Run and GCP Cloud SQL, but it looks like sessions are not shared across service instances. It seems to be trying to use localhost, while these are completely separate containers that can’t reach each other via localhost.

Is there any other way to use Cloud Run and Keycloak in HA?

2022-03-31 15:38:21,948 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.6.Final
2022-03-31 15:38:22,340 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN` with stack `jdbc-ping-tcp`
2022-03-31 15:38:25,051 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 0
2022-03-31 15:38:27,171 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 1
2022-03-31 15:38:29,275 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 2
2022-03-31 15:38:31,380 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 3
2022-03-31 15:38:33,488 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 4
2022-03-31 15:38:35,592 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 5
2022-03-31 15:38:37,705 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 6
2022-03-31 15:38:39,813 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 7
2022-03-31 15:38:41,918 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 8
2022-03-31 15:38:44,132 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: JOIN(localhost-42236) sent to localhost-34025 timed out (after 2000 ms), on try 9
2022-03-31 15:38:44,132 WARN [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) localhost-42236: too many JOIN attempts (10): becoming singleton
2022-03-31 15:38:44,357 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [localhost-42236|0] (1) [localhost-42236]
2022-03-31 15:38:44,362 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `localhost-42236`, physical addresses are `[169.254.1.2:7800]`
2022-03-31 15:38:45,300 WARN [org.infinispan.factories.GlobalComponentRegistry] (keycloak-cache-init) ISPN000126: Attempted to stop() from FAILED state, but caught exception: java.lang.IllegalStateException: Shutdown in progress

This thread was extremely helpful – thanks everyone!

I’ll add that if you need to override the IP address of the Keycloak instance, this environment variable works:

JAVA_OPTS_APPEND=-Djgroups.external_addr=1.2.3.4

You would need to do this if the Keycloak container is operating behind a firewall that is port forwarding the replication traffic through to the container.
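
For example, when starting the container (placeholder address and image name; kc.sh appends JAVA_OPTS_APPEND to the JVM options):

# 203.0.113.10 = the address other cluster members should use to reach this node
docker run \
  -e JAVA_OPTS_APPEND="-Djgroups.external_addr=203.0.113.10" \
  my-keycloak:19 start --optimized   # "my-keycloak:19" = placeholder image name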

1 Like

It doesn’t work for me. I also tried to set the JGROUPS_DISCOVERY_EXTERNAL_IP env variable, but that doesn’t work either.

How can I override the external address for JDBC_PING?

I found a solution. I’ve added a TCP element with the external_addr attribute to the stack element.
By the way, here is the configuration for MSSQL:

<stack name="jdbc-ping-tcp" extends="tcp">

      <TCP external_addr="${env.JGROUPS_DISCOVERY_EXTERNAL_IP}" />

      <JDBC_PING connection_driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"

                 connection_username="${env.KC_DB_USERNAME}" connection_password="${env.KC_DB_PASSWORD}"

                 connection_url="jdbc:sqlserver://${env.KC_DB_URL_HOST};databaseName=${env.KC_DB_URL_DATABASE};sendStringParametersAsUnicode=false;${env.JDBC_PARAMS:}"

                 initialize_sql="IF NOT EXISTS (SELECT * FROM sysobjects WHERE name='JGROUPSPING' and xtype='U')

                 BEGIN

                 CREATE TABLE JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, ping_data varbinary(5000), PRIMARY KEY CLUSTERED (own_addr, cluster_name));ALTER TABLE [JGROUPSPING] ADD DEFAULT (NULL) FOR [ping_data];

                 END"

                 info_writer_sleep_time="500"

                 remove_all_data_on_view_change="true"

                 stack.combine="REPLACE"

                 stack.position="MPING" />

    </stack>

Note that I pass the external address through the environment variable JGROUPS_DISCOVERY_EXTERNAL_IP. It’s convenient and easy to set an environment variable on the container.
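
A sketch of how that variable can be supplied (static placeholder IP and image name; in practice the value would come from your orchestrator or cloud metadata):

docker run \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=198.51.100.7 \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=changeme \
  -e KC_DB_URL_HOST=mssql.example.internal \
  -e KC_DB_URL_DATABASE=keycloak \
  my-keycloak-mssql:latest start --optimized   # placeholder image built with the stack above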

1 Like

Hi @xgp! I cannot get your configuration to work with PostgreSQL. It seems to be simply ignored, and JGroups does not even try to connect to the database:

2022-06-14 00:31:49,881 INFO  [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.8.Final
2022-06-14 00:31:51,080 INFO  [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN` with stack `default-jdbc`
2022-06-14 00:31:51,590 DEBUG [org.jgroups.stack.Configurator] (keycloak-cache-init) set property TCP.diagnostics_addr to default value /IP_ADDRESS_3
2022-06-14 00:31:51,681 DEBUG [org.jgroups.protocols.TCP] (keycloak-cache-init) thread pool min/max/keep-alive: 0/200/60000 use_fork_join=false, internal pool: 0/4/30000 (1 cores available)
2022-06-14 00:31:51,690 DEBUG [org.jgroups.protocols.MPING] (keycloak-cache-init) bind_addr=/IP_ADDRESS, mcast_addr=/IP_ADDRESS_2, mcast_port=46655
2022-06-14 00:31:51,770 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport[] (keycloak-cache-init) JGroups protocol stack: TCP(time_service_interval=500;thread_pool_max_threads=200;ergonomics=true;port_range=10;conn_expire_time=0;recv_buf_size=0;loopback_copy=false;thread_naming_pattern=pl;use_common_fork_join_pool=false;send_buf_size=640000;tcp_nodelay=true;stats=true;buffered_input_stream_size=8192;suppress_time_different_version_warnings=60000;diagnostics_port_range=50;id=57;logical_addr_cache_max_size=2000;suppress_time_different_cluster_warnings=60000;max_length=0;bundler_wait_strategy=park;sock_conn_timeout=300;client_bind_port=0;use_fibers=false;bundler_num_spins=5;use_fork_join_pool=false;enable_diagnostics=false;external_port=0;log_discard_msgs=true;bind_addr=IP_ADDRESS;bundler_capacity=16384;log_discard_msgs_version=true;defer_client_bind_addr=false;use_ip_addrs=false;buffered_output_stream_size=8192;log_accept_error=true;thread_pool_keep_alive_time=60000;thread_dumps_threshold=10000;bind_port=7800;thread_pool_min_threads=0;diag_enable_udp=true;reaper_interval=0;diagnostics_ttl=8;loopback_separate_thread=true;logical_addr_cache_expiration=360000;peer_addr_read_timeout=1000;msg_processing_max_buffer_size=5000;linger=-1;diagnostics_addr=IP_ADDRESS_3;receive_on_all_interfaces=false;diag_enable_tcp=false;max_bundle_size=64000;thread_pool_enabled=true;bundler_type=transfer-queue;logical_addr_cache_reaper_interval=60000;diagnostics_port=7500;who_has_cache_timeout=2000;spawn_thread_on_full_pool=false)
:MPING(send_on_all_interfaces=false;async_discovery_use_separate_thread_per_request=false;ergonomics=true;receive_on_all_interfaces=false;mcast_port=46655;stagger_timeout=0;async_discovery=false;max_rank_to_reply=0;ip_ttl=2;mcast_addr=IP_ADDRESS_2;max_members_in_discovery_request=500;send_cache_on_join=true;num_discovery_runs=3;bind_interface_str=;break_on_coord_rsp=true;stats=true;use_disk_cache=false;bind_addr=IP_ADDRESS;discovery_rsp_expiry_time=60000;id=6;return_entire_cache=false)
:MERGE3(check_interval=48000;stats=true;min_interval=10000;ergonomics=true;id=38;max_participants_in_merge=100;max_interval=30000;only_coords_run_consistency_checker=false)
:FD_SOCK(cache_max_age=10000;get_cache_timeout=1000;sock_conn_timeout=1000;client_bind_port=0;ergonomics=true;start_port=0;port_range=50;suspect_msg_interval=5000;num_tries=3;stats=true;external_port=0;bind_addr=IP_ADDRESS;keep_alive=true;id=2;cache_max_elements=200)
:FD_ALL(use_time_service=true;stats=true;timeout_check_interval=1000;ergonomics=true;interval=2000;id=20;timeout=10000)
:VERIFY_SUSPECT(num_msgs=1;use_mcast_rsps=false;stats=true;ergonomics=true;bind_addr=IP_ADDRESS;id=11;timeout=1000;use_icmp=false)
:pbcast.NAKACK2(resend_last_seqno_max_times=1;use_mcast_xmit=false;ergonomics=true;xmit_table_msgs_per_row=1024;xmit_table_max_compaction_time=30000;become_server_queue_size=50;xmit_interval=200;resend_last_seqno=true;max_xmit_req_size=511600;discard_delivered_msgs=true;suppress_time_non_member_warnings=60000;xmit_table_num_rows=50;stats=true;xmit_from_random_member=false;log_discard_msgs=true;log_not_found_msgs=true;xmit_table_resize_factor=1.2;id=41;max_rebroadcast_timeout=2000;use_mcast_xmit_req=false)
:UNICAST3(ergonomics=true;xmit_table_msgs_per_row=1024;xmit_table_max_compaction_time=30000;ack_threshold=100;sync_min_interval=2000;max_retransmit_time=60000;xmit_interval=200;max_xmit_req_size=511600;conn_close_timeout=5000;conn_expiry_timeout=120000;xmit_table_num_rows=50;stats=true;xmit_table_resize_factor=1.2;log_not_found_msgs=true;id=48)
:pbcast.STABLE(stability_delay=6000;stats=true;ergonomics=true;desired_avg_gossip=5000;max_bytes=1000000;id=13;send_stable_msgs_to_coord_only=true)
:pbcast.GMS(max_join_attempts=10;print_local_addr=false;view_bundling=true;leave_timeout=2000;all_clients_retry_timeout=100;log_view_warnings=true;install_view_locally_first=true;ergonomics=true;use_delta_views=true;use_flush_if_present=true;print_view_details=true;print_physical_addrs=true;join_timeout=2000;max_leave_attempts=10;view_ack_collection_timeout=2000;stats=true;num_prev_views=10;merge_timeout=5000;max_bundling_time=50;num_prev_mbrs=50;id=12;log_collect_msgs=false;membership_change_policy=org.jgroups.protocols.pbcast.GMS$DefaultMembershipPolicy@43f43ef9)
:UFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=4000000;min_credits=1600000;id=31;max_block_time=500)
:MFC(min_threshold=0.4;stats=true;ergonomics=true;max_credits=4000000;min_credits=1600000;id=30;max_block_time=500)
:FRAG3(frag_size=60000;stats=true;ergonomics=true;id=62)
2022-06-14 00:31:51,793 DEBUG [org.jgroups.protocols.MPING] (keycloak-cache-init) receiver thread started
2022-06-14 00:31:51,803 DEBUG [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) address=test-keycloak-x-2-37948, cluster=ISPN, physical address=IP_ADDRESS:7800
2022-06-14 00:31:53,870 INFO  [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) test-keycloak-x-2-37948: no members discovered after 2064 ms: creating cluster as coordinator

However, this line means that the config was loaded correctly:

2022-06-14 00:31:51,080 INFO  [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN` with stack `default-jdbc`

What am I doing wrong?

I have the setup as described in the gist using a PostgreSQL database, and don’t have a problem. Are you specifying the stack exactly as in the example? It looks from your logs like MPING isn’t getting replaced as the stack definition indicates. Can you post your full cache config file and your Dockerfile (or startup command)?

I am not using a Dockerfile to set up the DB config; instead, I am trying to use the cache-ispn.xml and keycloak.conf files to get this setup working. But for some reason it doesn’t pick up the stack added to the cache-ispn.xml file. Has anyone tried this before?

I had issues with the JDBC_PING configuration added to cache-ispn.xml - it was simply not being picked up at all. After reading some other posts here, I realized that I was missing the reference to the custom jgroups stack in <cache-container name="keycloak">. I.e., after adding

<transport lock-timeout="60000" stack="jdbc-ping-tcp"/>

where “jdbc-ping-tcp” is the name of the stack, everything is working great on AWS EC2!!!
Make sure you see the line in the log referencing the custom JDBC_PING stack:

INFO  [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN` with stack `jdbc-ping-tcp`

In case anyone else had problems with this - here’s my configuration. Hope it helps someone else.

cache-ispn-jdbc-ping.xml

<infinispan
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
        xmlns="urn:infinispan:config:11.0">

    <jgroups>
        <stack name="jdbc-ping-tcp" extends="tcp">

            <!-- http://keycloak.discourse.group/t/use-of-jdbc-ping-with-keycloak-17-quarkus-distro/13571/4 -->
            <JDBC_PING connection_driver="org.postgresql.Driver"
                       connection_username="${env.KC_DB_USERNAME}" connection_password="${env.KC_DB_PASSWORD}"
                       connection_url="${env.KC_DB_URL}"
                       initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, bind_addr VARCHAR(200) NOT NULL, created timestamp NOT NULL, cluster_name varchar(200) NOT NULL, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name));"
                       insert_single_sql="INSERT INTO JGROUPSPING (own_addr, bind_addr, created, cluster_name, ping_data) values (?,'${jboss.bind.address:127.0.0.1}',NOW(), ?, ?);"
                       delete_single_sql="DELETE FROM JGROUPSPING WHERE own_addr=? AND cluster_name=?;"
                       select_all_pingdata_sql="SELECT ping_data FROM JGROUPSPING WHERE cluster_name=?;"
                       info_writer_sleep_time="500"
                       remove_all_data_on_view_change="true"
                       stack.combine="REPLACE"
                       stack.position="MPING"/>
        </stack>
    </jgroups>

    <cache-container name="keycloak">
        <!-- custom stack must be referenced by name in the stack attribute of the transport element -->
        <transport lock-timeout="60000" stack="jdbc-ping-tcp"/>

        <local-cache name="realms">
         <!-- ... Rest of file identical to default cache-ispn.xml -->

Dockerfile

FROM quay.io/keycloak/keycloak:19.0.1 as builder

ARG POSTGRES_DB_HOST
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
ENV KC_DB=postgres
ENV KC_HTTP_RELATIVE_PATH=/auth
ENV POSTGRES_DB_HOST=$POSTGRES_DB_HOST
ENV KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml

# install custom themes
COPY themes/app/ /opt/keycloak/themes/app/

# add custom Infinispan cache configuration (if clustered)
ADD config/cache-ispn-jdbc-ping.xml /opt/keycloak/conf/cache-ispn-jdbc-ping.xml
RUN rm -f /opt/keycloak/conf/cache-ispn.xml

RUN /opt/keycloak/bin/kc.sh build --db=postgres
RUN /opt/keycloak/bin/kc.sh show-config

# ------------------------------------------------------------------------------------
FROM quay.io/keycloak/keycloak:19.0.1
COPY --from=builder /opt/keycloak/ /opt/keycloak/
WORKDIR /opt/keycloak

ENTRYPOINT /opt/keycloak/bin/kc.sh start --optimized --proxy=edge --db-url=$KC_DB_URL --spi-connections-jpa-default-migration-strategy=update
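
For reference, roughly how I build and run this image (tag, hostname and credentials are placeholders; the runtime variables are the ones referenced by the cache config and the ENTRYPOINT above):

docker build -t my-keycloak:19.0.1 .
docker run \
  -e KC_DB_URL="jdbc:postgresql://postgres.example.internal:5432/keycloak" \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=changeme \
  -e KC_HOSTNAME=auth.example.com \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=changeme \
  -p 8080:8080 \
  my-keycloak:19.0.1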
1 Like