How to change the import strategy for Keycloak on Kubernetes

Hi!

We have been running Keycloak 18.0.2 on Kubernetes with Postgres for some time now. To save costs, I’d like to migrate it to MySQL (while also upgrading Keycloak to a recent version).

So I exported the master realm by importing a dump of the database locally and running "kc.sh export … " on my machine. The exported files look OK so far.

Now I’m wondering how to import these files into my new Keycloak + MySQL setup. I put the exported data into a Kubernetes ConfigMap and attached it through a volume to the Keycloak pod, which I start with the option "--import-realm". The import starts, but the import strategy is IGNORE_EXISTING. Can anybody tell me how to change it to OVERWRITE_EXISTING?

The output I’m getting is:

2025-01-18 21:42:53,290 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Importing from directory /opt/keycloak/bin/…/data/import
2025-01-18 21:42:53,294 INFO [org.keycloak.services] (main) KC-SERVICES0030: Full model import requested. Strategy: IGNORE_EXISTING

Any help is appreciated!
Thanks,
Pascal

Made some progress:
I also installed mysql-server locally and ran "kc.sh import --dir". This seems to do an overwrite. But still, how would I do that in a Kubernetes environment?

Thanks,
best regards,
Pascal

Hello @ppaulis, you just have to change the pod container args. It should be something like this (for a pod, but very similar for a deployment):

apiVersion: v1
kind: Pod
# ...
spec:
  containers:
    - image: quay.io/keycloak/keycloak:26.1.0
      args: ["import", "--dir", "/path/to/your/import_file"]
      # ...
  restartPolicy: Never # or "OnFailure" if you want the import to restart when it fails

If anyone reads this post later, everything you need to know about Keycloak import/export is here: Importing and Exporting Realms - Keycloak

Thanks! @skydrinker-tox
Should have thought of that…

However, when running

["import", "--dir", "/path/to/import/dir"]

I’m getting the following error:

quarkus.datasource.jdbc.driver is set to 'org.h2.jdbcx.JdbcDataSource' but it is build time fixed to 'com.mysql.cj.jdbc.MysqlXADataSource'. Did you change the property quarkus.datasource.jdbc.driver after building the application?
2025-01-21 09:56:51,400 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: FrontEnd: https://xxxxxxxxxxxxxxxxxxxxx, Strict HTTPS: false, Path: , Strict BackChannel: false, Admin: , Port: -1, Proxied: true
2025-01-21 09:56:51,556 WARN [io.agroal.pool] (agroal-11) Datasource '': No suitable driver found for jdbc:mysql://xxxxxxxxx.mysql.database.azure.com:3306/keycloak_staging?useSSL=false&characterEncoding=UTF-8

Is there a way to change this property in the given context (Kubernetes Deployment)?

Here’s the list of relevant env vars I’m currently using (terraform syntax):

env {
  name  = "KC_DB"
  value = "mysql"
}

env {
  name  = "KC_DB_DRIVER"
  value = "mysql"
}

env {
  name  = "KC_DB_URL"
  value = "jdbc:mysql://${var.database_hostname}:3306/${var.database_name}?useSSL=false&characterEncoding=UTF-8"
}

Thanks!

When I do the import locally and then import the MySQL dump, I get the following when running Keycloak with "kc.sh start-dev":

Updating the configuration and installing your custom providers, if any. Please wait.

ERROR: Failed to run 'build' command.

ERROR: io.quarkus.builder.BuildException: Build failure: Build failed due to errors

[error]: Build step org.keycloak.quarkus.deployment.KeycloakProcessor#checkJdbcDriver threw an exception: io.quarkus.runtime.configuration.ConfigurationException: Unable to find the JDBC driver (mysql). You need to install it.

If I’m not mistaken, MySQL is supported out of the box?

Yes, MySQL drivers are shipped with Keycloak (except the Oracle drivers): (Configuring the database - Keycloak)
Regarding the error you get in the Kubernetes context, have you followed the recommendations in Running Keycloak in a container - Keycloak? Do you use the official Docker image (quay.io/keycloak/keycloak), or are you somehow deriving from it or using another image?

Sorry again, I’ve now done some homework on configuring Keycloak. I’m now using an optimized image, and the MySQL connection works. My procedure is the following:

  1. Start a Kubernetes pod against an empty database with the command "import --dir /path/to/import/dir". This creates all 87 tables. The restart policy is "Never" to avoid running the import multiple times (see logs below).
  2. Remove the pod and switch to a Kubernetes Deployment with the command "start-dev".
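
For reference, an optimized image like the one mentioned above can be built along these lines. This is only a sketch following the official container guide; the base image tag and the build options are assumptions, not the poster's actual Dockerfile:

```dockerfile
# Build stage: bake the database vendor into the optimized server build
FROM quay.io/keycloak/keycloak:26.1.0 AS builder
ENV KC_DB=mysql
RUN /opt/keycloak/bin/kc.sh build

# Runtime stage: copy the pre-built server so no rebuild happens at startup
FROM quay.io/keycloak/keycloak:26.1.0
COPY --from=builder /opt/keycloak/ /opt/keycloak/
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
```

With the driver fixed at build time this way, the earlier "build time fixed to com.mysql.cj.jdbc.MysqlXADataSource" mismatch should no longer occur when running auxiliary commands like import.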

Logs from the single pod (I removed the MySQL migration warnings):

2025-01-21 17:08:24,748 ERROR [org.keycloak.quarkus.runtime.configuration.mappers.PropertyMappers] (main) Hostname v1 options [proxy] are still in use, please review your configuration
2025-01-21 17:08:28,478 INFO [org.keycloak.quarkus.runtime.storage.infinispan.CacheManagerFactory] (Thread-5) Starting Infinispan embedded cache manager
2025-01-21 17:08:28,710 INFO [org.infinispan.CONTAINER] (Thread-5) Virtual threads support enabled
2025-01-21 17:08:29,047 INFO [org.infinispan.CONTAINER] (Thread-5) ISPN000556: Starting user marshaller 'org.infinispan.commons.marshall.ImmutableProtoStreamMarshaller'
2025-01-21 17:08:29,689 INFO [org.infinispan.transaction.lookup.JBossStandaloneJTAManagerLookup] (Thread-5) ISPN000107: Retrieving transaction manager Transaction: unknown
2025-01-21 17:08:30,176 INFO [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener

2025-01-21 17:10:38,347 WARN [io.agroal.pool] (main) Datasource '': JDBC resources leaked: 2 ResultSet(s) and 0 Statement(s)
2025-01-21 17:10:38,735 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: node_433177, Site name: null
2025-01-21 17:10:39,086 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Importing from directory /opt/keycloak/data/import
2025-01-21 17:10:39,096 INFO [org.keycloak.services] (main) KC-SERVICES0030: Full model import requested. Strategy: OVERWRITE_EXISTING
2025-01-21 17:11:59,001 INFO [org.keycloak.exportimport.util.ImportUtils] (main) Realm 'master' imported
2025-01-21 17:12:04,342 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Imported users from /opt/keycloak/data/import/master-users-0.json
2025-01-21 17:12:07,048 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Imported users from /opt/keycloak/data/import/master-users-3.json
2025-01-21 17:12:14,973 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Imported users from /opt/keycloak/data/import/master-users-1.json
2025-01-21 17:12:19,392 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Imported users from /opt/keycloak/data/import/master-users-2.json
2025-01-21 17:12:21,083 WARN [io.agroal.pool] (main) Datasource '': JDBC resources leaked: 1 ResultSet(s) and 0 Statement(s)
2025-01-21 17:12:21,085 INFO [com.arjuna.ats.jbossatx] (main) ARJUNA032014: Stopping transaction recovery manager
2025-01-21 17:12:21,113 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (nonserver) mode
2025-01-21 17:12:21,114 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Script upload is disabled
2025-01-21 17:12:21,114 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command.

As mentioned above, the database tables have been created.
The logs say that the master realm and the users have also been imported. However, the resulting Keycloak instance is completely empty: no clients, no scopes, no users, etc.

I’m guessing that I missed some crucial part again? :sweat_smile:

Thanks!

Are you sure your Keycloak pod connects to your freshly populated database? What do the logs of your Keycloak instance say (try the verbose option, as mentioned in the logs of your importer pod)?

To simplify the import/export process, you can use a Kubernetes Job instead of creating a pod and removing it yourself.
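
A minimal sketch of such a Job (the image tag, ConfigMap name, mount path, and env wiring are assumptions; adapt them to your setup):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: keycloak-realm-import
spec:
  backoffLimit: 0               # do not retry a failed import automatically
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: keycloak-import
          image: quay.io/keycloak/keycloak:26.1.0
          args: ["import", "--override", "true", "--dir", "/opt/keycloak/data/import"]
          # reuse the same KC_DB / KC_DB_URL / credentials env as the main deployment
          volumeMounts:
            - name: realm-import
              mountPath: /opt/keycloak/data/import
      volumes:
        - name: realm-import
          configMap:
            name: keycloak-realm-export   # hypothetical ConfigMap holding the exported JSON files
```

A Job records completion status and can be cleaned up with `kubectl delete job keycloak-realm-import` once it has succeeded.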

Either the process switches to another database provider (the local database? H2?) after having done the migrations in the MySQL database, or it executes the imports but fails to commit them to the MySQL database, perhaps? Could it be related to transaction management somehow?

@skydrinker-tox I tried to reproduce the behaviour locally on my computer:

With the following config in "conf/keycloak.conf":

db=mysql
db-username=root
db-password=password
db-url=jdbc:mysql://localhost/keycloak

./bin/kc.sh build --db=mysql

./bin/kc.sh import --dir=/home/user/keycloak/keycloak-26.1.0/data/import --override=true --verbose

The behaviour is mostly the same… the tables are created, and the import says it was successful, but nothing gets imported. However, one difference is that locally, the table names are now all uppercase:

Logs from the import :

2025-01-23 10:53:41,115 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Importing from directory /home/user/keycloak/keycloak-26.1.0/data/import
2025-01-23 10:53:41,119 INFO [org.keycloak.services] (main) KC-SERVICES0030: Full model import requested. Strategy: OVERWRITE_EXISTING
2025-01-23 10:54:19,918 INFO [org.keycloak.exportimport.util.ImportUtils] (main) Realm 'master' imported
2025-01-23 10:54:21,783 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Imported users from /home/user/keycloak/keycloak-26.1.0/data/import/master-users-3.json
2025-01-23 10:54:24,260 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Imported users from /home/user/keycloak/keycloak-26.1.0/data/import/master-users-2.json
2025-01-23 10:54:29,078 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Imported users from /home/user/keycloak/keycloak-26.1.0/data/import/master-users-1.json
2025-01-23 10:54:32,223 INFO [org.keycloak.exportimport.dir.DirImportProvider] (main) Imported users from /home/user/keycloak/keycloak-26.1.0/data/import/master-users-0.json
2025-01-23 10:54:33,104 WARN [io.agroal.pool] (main) Datasource '': JDBC resources leaked: 1 ResultSet(s) and 0 Statement(s)
2025-01-23 10:54:33,106 INFO [com.arjuna.ats.jbossatx] (main) ARJUNA032014: Stopping transaction recovery manager
2025-01-23 10:54:33,128 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (nonserver) mode
2025-01-23 10:54:33,128 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) Error details: java.lang.RuntimeException: Script upload is disabled

Sorry, I’ve tried to reproduce this locally but I don’t get any issue. Here are the steps I followed:

  1. Export the realms by starting Keycloak with the export command. As a result, I have one JSON file for the master realm and one for its users (this Keycloak instance was using a Postgres DB).
  2. Change the appropriate config to make Keycloak use a freshly started MySQL DB (I’m using containers locally). Here is the config I changed:
    • KC_DB=mysql
    • KC_DB_URL=jdbc:mysql://mysql-db-host:3306/testdb
  3. Start Keycloak in start-dev mode (so I don’t need to rebuild an optimized image before running). The logs show that the database schema is created (even if I get warnings about future deprecations during the Liquibase actions). Looking at the DB after that: the tables were created in uppercase, but that’s fine.
  4. Stop Keycloak and restart it using the import command. Everything goes well. Looking at the DB after that: the data has been imported (clients, users, etc.).
  5. Run Keycloak in start-dev mode: Keycloak uses the MySQL DB and works fine (same as when it was using Postgres).

Maybe step 3 is missing on your side (start Keycloak with the empty MySQL DB so it creates the DB objects like tables and constraints before running Keycloak with the import command).

I must be getting a simple but crucial step wrong…

I re-did all the tests, also with a fresh export from my Keycloak 26.1.0 installation on Postgres. Both with a local installation on my Ubuntu machine and with a docker compose setup:

services:
  keycloak_web:
    image: quay.io/keycloak/keycloak:26.1.0
    container_name: keycloak_web
    environment:
      KC_DB: mysql
      KC_DB_URL: jdbc:mysql://keycloakdb:3306/keycloak
      KC_DB_USERNAME: keycloak
      KC_DB_PASSWORD: password
      KC_HOSTNAME: localhost
      KC_HOSTNAME_PORT: 8080
      KC_HOSTNAME_STRICT: false
      KC_HOSTNAME_STRICT_HTTPS: false
      KC_LOG_LEVEL: info
      KC_METRICS_ENABLED: false
      KC_HEALTH_ENABLED: false
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    command: start-dev
    #command: import --dir /opt/keycloak/data/import
    #command: show-config
    depends_on:
      - keycloakdb
    ports:
      - "8080:8080"
    volumes:
      - ./import:/opt/keycloak/data/import

  keycloakdb:
    image: mysql:8.4.4
    container_name: keycloakdb
    restart: on-failure
    environment:
      MYSQL_DATABASE: 'keycloak'
      MYSQL_USER: 'keycloak'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'

I also enabled the query log on the MySQL container. In fact, all the INSERTs are there (scopes, users, …), but it seems they are not committed to the database. I see a lot of "set autocommit=0" and "set autocommit=1", and at the end there are rollback commands. I’ll investigate further in this direction.

If you still have it, could you perhaps paste your docker-compose file, to make sure there aren’t any important differences?

Doing start-dev and an import separately doesn’t change the outcome.
Using MySQL 8.0 or 8.4 also changes nothing.

-Dkeycloak.migration.action=import
-Dkeycloak.migration.provider=dir
-Dkeycloak.migration.dir=/opt/keycloak/data/import
-Dkeycloak.migration.strategy=IGNORE_EXISTING (or OVERWRITE_EXISTING)

Also, there is an option to replace placeholders (like secrets) in your JSON files:
-Dkeycloak.migration.replace-placeholders=true

That will replace "${ENV_NAME}" in the JSON with the environment variable of the same name, e.g. ENV_NAME.
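
These `keycloak.migration.*` system properties can also be passed to a containerized Keycloak. One way, assuming the official image's `JAVA_OPTS_APPEND` support, is an env var on the container (a sketch; the import path must match wherever your volume is mounted):

```yaml
# Hypothetical fragment of the Keycloak container spec in a Deployment or Job
env:
  - name: JAVA_OPTS_APPEND
    value: >-
      -Dkeycloak.migration.action=import
      -Dkeycloak.migration.provider=dir
      -Dkeycloak.migration.dir=/opt/keycloak/data/import
      -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
```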

I can’t see any significant difference, but if you want, here is the compose file I used for the test (just comment/uncomment the Keycloak env variables depending on the database you want to connect to).
I also tried with your compose file and had no issue with the import; it works well.

services:
  kclk-service:
    image: 'quay.io/keycloak/keycloak:26.1.0'
    ports:
      - '8080:8080'
      - '8443:8443'
    restart: "no"
    depends_on:
      - pgdb-service
      - mysql-db-service
    volumes:
      - './mounted-dir/:/mounted-dir/'
    command: ['start-dev']
    #command: ['export', '--dir', '/mounted-dir/export-master/', '--realm', 'master']
    #command: ['import', '--override', 'true', '--dir', '/mounted-dir/export-master/']
    environment:
      - KEYCLOAK_ADMIN=admin
      - KEYCLOAK_ADMIN_PASSWORD=admin
      - KC_DB=postgres
      - KC_DB_URL=jdbc:postgresql://pgdb-service:5432/dbtest
      # - KC_DB=mysql
      # - KC_DB_URL=jdbc:mysql://mysql-db-service:3306/dbtest
      - KC_DB_USERNAME=dbadmin
      - KC_DB_PASSWORD=dbadmin
      # - KC_DB_USERNAME=root
      # - KC_DB_PASSWORD=mysqlrootpass
      - KC_HTTP_ENABLED=true
      - KC_HOSTNAME_STRICT=false
      - KC_HOSTNAME=http://localhost:8080

  pgdb-service:
    image: 'postgres:12.17'
    ports:
      - '5432:5432'
    restart: always
    environment:
      - POSTGRES_PASSWORD=dbadmin
      - POSTGRES_USER=dbadmin
      - POSTGRES_DB=dbtest
      - PGDATA=/var/lib/postgresql/pgdata
    volumes:
      - 'pgdata:/var/lib/postgresql/pgdata'

  mysql-db-service:
    image: mysql:latest
    restart: always
    ports:
      - '3306:3306'
    environment:
      - MYSQL_ROOT_PASSWORD=mysqlrootpass
      - MYSQL_DATABASE=dbtest
    volumes:
      - 'mysql-kclk-data:/var/lib/mysql'

volumes:
  pgdata:
  mysql-kclk-data:

I can import a new export containing only some example data, like a realm role. My guess, then, is that the problem lies somewhere in my exported data. I’ll try to strip it down step by step to see where it hangs.

It finally works now! The last missing piece was the "authorizationSettings" block in my export, which caused a "script upload is disabled" error during import. My guess is that this rolled back the database transaction in the end. After removing this block, the import finished successfully.
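
For anyone hitting the same issue, removing that block from the export files can be scripted instead of done by hand. A minimal sketch; the assumption (based on a standard realm export) is that "authorizationSettings" sits on the client entries, and the file names are hypothetical:

```python
import json

def strip_authorization_settings(realm):
    """Remove the 'authorizationSettings' block from every client in a realm export dict.

    Returns the number of clients that had the block removed.
    """
    removed = 0
    for client in realm.get("clients", []):
        if client.pop("authorizationSettings", None) is not None:
            removed += 1
    return removed

# Usage (hypothetical file name):
# with open("master-realm.json") as f:
#     realm = json.load(f)
# strip_authorization_settings(realm)
# with open("master-realm.json", "w") as f:
#     json.dump(realm, f, indent=2)
```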

Thanks @skydrinker-tox for your help & patience!

Best regards,
