Our team is trying to deploy the Keycloak 25.0 image in Azure Container Apps, and we have encountered many issues, especially with exposing the admin console using Azure Container Apps’ built-in ingress.
We need to know whether Keycloak can run on Container Apps or whether it is unsupported, as there is no official documentation.
And we would appreciate help with this error:
upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection termination
We battled extensively with Keycloak to make it work in Azure. We now have it fully clustered, connected to Postgres, with health, metrics, and the admin console all working. It wasn’t easy, as the documentation is badly lacking.
May I recommend you run kc.sh build AFTER you set the database settings. In your case, that will be right at the end.
Try adding these at the end of your Dockerfile, before you run kc.sh build:
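Something along these lines; a minimal sketch rather than our exact file, so treat the image tag and options as placeholders:

```dockerfile
FROM quay.io/keycloak/keycloak:25.0

# Build-time options: set these BEFORE the build step so the
# Postgres driver, health, and metrics get baked into the image.
ENV KC_DB=postgres
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true

# Run the build last, once every build-time option is in place.
RUN /opt/keycloak/bin/kc.sh build
```

Runtime settings like KC_DB_URL, KC_DB_USERNAME, and KC_DB_PASSWORD can then be supplied as Container Apps environment variables.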
Which container solution did you use, please? A Kubernetes cluster or Azure Container Apps?
In our case, we use Azure Container Apps, and the application is up, but we have problems exposing port 8443; or maybe the Azure Container Apps ingress is not configured correctly.
upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection termination
Definitely Azure Container Apps. We tried everything: VMs, Web Apps, K8s, even Azure Functions. All worked, except… each had its own flaws. So far, Container Apps works the best.
Expose port 8080. And, when you’re ready, port 9000 too (the management interface, which serves health and metrics in Keycloak 25).
You don’t need SSL on KC - that’s handled so elegantly by your ingress load balancer.
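If it helps, roughly what the ingress and proxy wiring looks like with the az CLI. The app name, resource group, and hostname are placeholders, not our actual values:

```sh
# Point the built-in ingress at Keycloak's plain-HTTP port.
# Container Apps terminates TLS at the edge for you.
az containerapp ingress enable \
  --name my-keycloak-app \
  --resource-group my-rg \
  --type external \
  --target-port 8080

# Tell Keycloak it sits behind a TLS-terminating proxy.
az containerapp update \
  --name my-keycloak-app \
  --resource-group my-rg \
  --set-env-vars "KC_HTTP_ENABLED=true" "KC_PROXY_HEADERS=xforwarded" \
    "KC_HOSTNAME=https://auth.example.com"
```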
I currently have my test cluster up and running as an Azure Container App. I will post a minimal example later when I have time.
@uvznab how did you configure your Infinispan cluster? Currently I’m using AzurePing (i.e., discovery via Azure Blob Storage), but it would be nice to get rid of that dependency.
We’re using JDBCPing on Azure through the Postgres database. It has proven less complicated than AzurePing, more reliable, and, above all, cheaper (assuming you need the database regardless).
Here’s what we did:
1. Created a cache-ispn-jdbc-ping.xml file with our database settings (a sketch follows below).
2. Modified the Dockerfile to copy that XML to /opt/keycloak/conf.
3. Set the env var KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml (the CLI equivalent is --cache-config-file).
4. To be sure, we ran Keycloak with Infinispan and JGroups logging turned up to confirm the cluster forms correctly (KC_LOG_LEVEL=org.infinispan:DEBUG,org.jgroups:DEBUG).
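For the curious, a sketch of what such a file can look like (not our exact file: the stack name, env-var placeholders, and initialize_sql are illustrative), following the usual pattern of extending the bundled tcp stack and swapping its MPING discovery for JDBC_PING:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns="urn:infinispan:config:15.0">
    <!-- Custom stack: extend the bundled tcp stack, replacing its
         MPING discovery with JDBC_PING against the shared Postgres DB. -->
    <jgroups>
        <stack name="jdbc-ping-tcp" extends="tcp">
            <JDBC_PING connection_driver="org.postgresql.Driver"
                       connection_url="${env.KC_DB_URL}"
                       connection_username="${env.KC_DB_USERNAME}"
                       connection_password="${env.KC_DB_PASSWORD}"
                       initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name))"
                       stack.combine="REPLACE"
                       stack.position="MPING"/>
        </stack>
    </jgroups>

    <cache-container name="keycloak">
        <transport stack="jdbc-ping-tcp" lock-timeout="60000"/>
        <!-- ...the cache definitions from the stock cache-ispn.xml go here... -->
    </cache-container>
</infinispan>
```

The Dockerfile then only needs a COPY of that file into /opt/keycloak/conf, plus the KC_CACHE_CONFIG_FILE env var from step 3.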
If there’s interest, I’ll publish a full writeup on each step.
That’s part of the beauty of JDBCPing: no more of that TCP rubbish.
There’s no bind_addr, bind_port, and the like.
Instead, you have a connection_url, a connection string pointing to your Postgres database, and you’re set.
I’m using AzurePing as the discovery protocol and TCP as the transport protocol. I would guess you do the same.
But that leaves the issue that the TCP transport can bind to one of two IP addresses (at least in my containers), 100.x.x.x or 169.x.x.x, and only 100.x.x.x is reachable from the other replicas (I’m using Azure Container Apps, btw).
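For now I work around it by pinning the bind address with a JGroups match pattern via Keycloak’s JAVA_OPTS_APPEND. A sketch; which property my stack actually reads is my assumption, so I set both common ones:

```sh
# Bind the JGroups transport to the replica-reachable 100.x.x.x
# interface rather than the 169.x one. jgroups.bind_addr is used by
# the JGroups default stacks, jgroups.bind.address by Infinispan's.
JAVA_OPTS_APPEND="-Djgroups.bind_addr=match-address:100.* -Djgroups.bind.address=match-address:100.*"
```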
You just had to get technical, didn’t you… Fine
While JDBCPing is technically a discovery protocol, it’s not a network-based one. Discovery is done via a table in the shared database. And yes, it technically needs transport to connect to that database, but in this protocol that layer is implied by the URL/connection string: by setting the URL, the transport layer has only one possible route to reach it, and therefore one bound interface.
For any of the MPing-style protocols that use multicast, explicit binding is mandatory. That’s why you see it in most ping protocols.
See the required settings of JDBC_PING in the JGroups documentation.
From memory, Azure used to restrict IGMP, so multicast wouldn’t work on their services (Service Fabric, VMs, Web Apps, etc.), but they might allow it now on Container Apps. If you get it to work, let me know. That’ll be very interesting!