Reading the upgrade guide, I wasn’t able to understand how, in practice, a Keycloak server (or a cluster of servers) should be upgraded in a containerized environment.
First: should the upgrade script be run at all, and how? If we simply use a newer Keycloak image for the new servers, what else needs to be done? Should we expect downtime? (The guide states that the server should be stopped before running the upgrade script, if that is even relevant when using Docker images.)
Second: should we let the new server version upgrade the DB automatically, or should we upgrade the DB manually?
Third: what about backward compatibility with the older server versions? From the upgrade guide it seems that the old servers stop working after the upgrade, which I assume is due to the DB schema migration. Is there a way to avoid downtime while upgrading?
Starting on this path.
Did you ever get a good process in place?
On TEST env
0 → Back up your KC DB
1 → Upgrade your KC base image version and run it; it will upgrade the KC DB automatically
2a (optional) → If you use external extensions (e.g. aerogear metrics), check if they still work; if not, update them and redeploy (this could require rebuilding your custom KC Docker image)
2b (optional) → If you have developed custom extensions, check if they still work; if not, update your dependencies, fix them, and redeploy them (this could require rebuilding your custom KC Docker image)
3 (optional) → If you have developed custom themes, check if they still work; if not, update your themes and redeploy them (this could require rebuilding your custom KC Docker image)
4 → Check that your DevOps tools still work; if not, update/fix them
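Steps 0 and 1 can be sketched as shell commands. This is a minimal sketch assuming a Docker Compose setup with a PostgreSQL backend; the container/service names (`keycloak-db`, `keycloak`), database name, credentials, and image tag are all placeholders you would replace with your own:

```shell
# 0 → Back up the Keycloak database before touching anything
# (assumes a Postgres container named "keycloak-db" with DB/user "keycloak")
docker exec keycloak-db pg_dump -U keycloak keycloak > keycloak-backup.sql

# 1 → Pull the newer image and restart the Keycloak service.
# On first boot, the new version migrates the DB schema automatically.
docker compose pull keycloak
docker compose up -d keycloak

# Watch the logs to confirm the migration completed before routing traffic
docker compose logs -f keycloak
```

Restoring the dump with `psql` is then your rollback path if the migration fails.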
On PROD env
Just repeat 0 + 1 with the possibly rebuilt KC image (if you have extensions or themes)
Note: It is always better to apply updates regularly in order to avoid having a big gap between versions.
@avner-hoffmann, @semangard, I believe you guys completed the Keycloak upgrade process successfully.
Can you please share whether you faced any downtime in this process?
If yes, how long was it, and what were the main reasons behind it?
This information will help us plan this process better. Thanks.
Hi @atulchauhan01: yes, you have downtime, because only one container/pod can upgrade the DB at a time.
So you have to scale your containers down to 1 and then launch the upgrade.
Usually the downtime should be a matter of about 5 minutes
(experienced on Docker Swarm).
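On Docker Swarm, the scale-down-then-upgrade sequence described above might look roughly like this; the service name `keycloak`, the image tag, and the replica count of 3 are assumptions for illustration:

```shell
# Scale down so only one instance performs the DB schema migration
docker service scale keycloak=1

# Roll the single remaining replica onto the new image;
# it migrates the DB on startup
docker service update --image quay.io/keycloak/keycloak:24.0 keycloak

# Once the logs show the migration finished, scale back out
docker service scale keycloak=3
```

The downtime window is essentially the single replica's restart plus the migration itself, which matches the "matter of about 5 minutes" reported above.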