How can I isolate Infinispan between namespaces in one Kubernetes cluster?

Currently, I have one Kubernetes cluster with 2 namespaces: NS1 and NS2. I’m using the jboss/keycloak Docker image.

I am running 2 Keycloak instances, one in each of those namespaces, and I expected them to run independently. But that is not true for the Infinispan caching inside Keycloak. The problem is that all sessions of the KC instance in NS1 are invalidated repeatedly while the KC pod in NS2 is in the “CrashLoopBackOff” state.

The logs show the following whenever the crashing KC pod in NS2 tries to restart:

15:14:46,784 INFO [org.infinispan.CLUSTER] (remote-thread--p10-t412) [Context=clientSessions] ISPN100002: Starting rebalance with members [keycloak-abcdef, keycloak-qwerty], phase READ_OLD_WRITE_ALL, topology id 498

keycloak-abcdef is the KC pod in NS1 and keycloak-qwerty is the KC pod in NS2. So, the KC pod in NS1 can see and be affected by the KC pod from NS2.

After some research, I found that Keycloak uses an Infinispan cache to manage session data, and that Infinispan uses JGroups to discover nodes, with PING as the default method. I assume this mechanism is the root cause of the invalidated sessions, because it contacts other KC pods in the same cluster (even in different namespaces) to do things like synchronization.

Is there any way to isolate the Infinispan in Keycloak between namespaces?

Thank you!

The problem you are seeing comes from the discovery mechanism used by JGroups. When two Keycloak pods end up on the same node, they will find each other, as all pods on a node normally share the same broadcast address.

What you can do is change the discovery mechanism to KUBE_PING or DNS_PING. If all your instances use one of those methods, they will only find instances in the same namespace. DNS_PING is the simpler of the two, but it needs a headless service.

KUBE_PING needs a service account, a role and a roleBinding. If you are not the cluster admin, you probably won’t have permission to create those.

Example of DNS_PING configuration:

env:
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: dns.DNS_PING
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: dns_query=keycloak-headless
  - name: CACHE_OWNERS_COUNT
    value: "2"
  - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
    value: "2"

Where keycloak-headless is the headless service.
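A minimal headless Service to back that dns_query could look like the following sketch. The name keycloak-headless and the app: keycloak selector are assumptions; adapt them to your deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: keycloak-headless
spec:
  clusterIP: None                 # headless: DNS resolves directly to the pod IPs
  publishNotReadyAddresses: true  # let pods discover peers before they are ready
  selector:
    app: keycloak
  ports:
    - name: jgroups
      port: 7600                  # default JGroups TCP port
      targetPort: 7600
```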


Hi @weltonrodrigo,
Thank you for your suggestion.
It sounds great to me, but in the meantime I also got a suggestion that I could use JDBC_PING in my situation, to limit the discovery mechanism to the instances/nodes that use the same database. What do you think about this solution?

I would say it’s a great recommendation: wildfly - How can isolate Keycloak Infinispan between namespaces in one Kubernetes cluster to prevent KC pod from discovering and synchronizing from one other - Stack Overflow

With JDBC_PING you don’t have any additional dependency. KUBE_PING depends on the Kubernetes API, and DNS_PING requires DNS in the cluster. Anyway, you are the admin, so you can evaluate and accept the risk of each additional dependency and the increased complexity, where things can go wrong.


I agree with this assessment. JDBC_PING only needs a pre-existing table in the database.

This can become a problem if you are orchestrating several Keycloak instances, as Keycloak’s migration scripts will only create the tables (or relations) needed by Keycloak, not the one needed by JDBC_PING.

So, your options:

  • create a headless service for DNS_PING
  • create role and rolebinding for KUBE_PING
  • create a table (or relation) for JDBC_PING

choose your fighter :robot:
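For the JDBC_PING option on PostgreSQL, the table it expects is roughly the following. This is a sketch based on the default JDBC_PING schema; verify the column names against your JGroups version:

```sql
CREATE TABLE IF NOT EXISTS JGROUPSPING (
    own_addr     varchar(200) NOT NULL,               -- address of the cluster member
    cluster_name varchar(200) NOT NULL,               -- name of the JGroups cluster
    created      timestamp DEFAULT current_timestamp, -- when the entry was written
    ping_data    bytea,                               -- serialized member information
    CONSTRAINT PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name)
);
```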


This is wrong. JDBC_PING creates its own table by itself, which it needs to manage the node registry.

Do you have any doc on this? In my experience, activating JDBC_PING resulted in a relation "jgroupsping" does not exist error in Postgres.

I’m using this in all my trainings/workshops, and also at various customers in production. The table is created on startup. There’s also the attribute "initialize_sql" (or similar) to adjust the CREATE TABLE statement, …

Just the first hit on a google search:

A default table will be created at first connection, …

http://www.jgroups.org/javadoc/org/jgroups/protocols/JDBC_PING.html


Hi @dasniko,

I’m running jboss/keycloak:15.0.2, but it does not create the jgroupsping relation on startup, and the error JGRP000145: Error updating JDBC_PING table: org.postgresql.util.PSQLException: ERROR: relation "jgroupsping" does not exist is printed out.

But if I change the environment value from

ENV JGROUPS_DISCOVERY_PROTOCOL JDBC_PING

to

ENV JGROUPS_DISCOVERY_PROTOCOL JDBC_PING
ENV JGROUPS_DISCOVERY_PROPERTIES datasource_jndi_name=java:jboss/datasources/KeycloakDS,info_writer_sleep_time=500,initialize_sql="CREATE TABLE IF NOT EXISTS JGROUPSPING ( own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, created timestamp default current_timestamp, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name))"

the jgroupsping relation will be created as expected.

Do you know of any reason that prevents the jgroupsping relation from being created automatically? Or is initialize_sql mandatory in the configuration? Thank you for your support!

Also thanks a lot to @weltonrodrigo and @jangaraj for sharing your solutions.

As you might see in your properties env var, the SQL is DB-specific. The default SQL statements in JDBC_PING use the MySQL dialect. If you use a different database, they may fail, most likely because of the byte-array data type in the initialize SQL, which differs from DB to DB. You are using the “bytea” data type, so I assume you are on PostgreSQL, which doesn’t understand the MySQL data type. There should be an error message in the logs of your database…


Oh yes, I’m using PostgreSQL for the Keycloak DB. It’s clear to me now.
Thank you for your explanation!

@weltonrodrigo Hi, I want to deploy Keycloak 19.0.3 (the Quarkus distribution) on Kubernetes using KUBE_PING. So I have created a service account, role and roleBinding as this link mentioned. This is the RBAC configuration:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: keycloak-kubeping-pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: keycloak-kubeping-api-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: keycloak-kubeping-pod-reader
subjects:
- kind: ServiceAccount
  name: keycloak-kubeping-service-account
  namespace: <MY_NAMESPACE>
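The ServiceAccount referenced above is defined roughly like this (the name has to match the subject of the RoleBinding):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: keycloak-kubeping-service-account
  namespace: <MY_NAMESPACE>
```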

And this is the config of my Keycloak deployment:

apiVersion: v1
kind: Service
metadata:
  name: keycloak19-headless
spec:
  publishNotReadyAddresses: true
  clusterIP: None
  selector:
    app: keycloak19
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: http
    - name: jgroups
      port: 7600
      protocol: TCP
      targetPort: jgroups
  sessionAffinity: None
  type: ClusterIP  
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak19
  labels:
    app: keycloak19
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: keycloak19
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak19
  labels:
    app: keycloak19
spec:                           
  replicas: 2
  selector:
    matchLabels:
      app: keycloak19
  template:
    metadata:
      labels:
        app: keycloak19
    spec:
      serviceAccount: keycloak-kubeping-service-account
      serviceAccountName: keycloak-kubeping-service-account    
      containers:
      - name: keycloak19
        image: quay.io/keycloak/keycloak:19.0.3
        args: ["start-dev"]
        env:
        - name: KEYCLOAK_ADMIN
          value: "admin"
        - name: KEYCLOAK_ADMIN_PASSWORD
          value: *******
        - name: KC_PROXY
          value: "edge"
        - name: KC_DB
          value: "postgres"
        - name: KC_DB_URL
          value: "jdbc:postgresql://keycloak-postgresql:5432/keycloak"    
        - name: KC_DB_USERNAME
          value: "keycloak"
        - name: KC_DB_PASSWORD
          value: *******
        - name: KC_LOG_LEVEL
          value: "DEBUG"    
        - name: KC_CACHE
          value: "ispn"    
        - name: KC_CACHE_STACK
          value: "kubernetes"        
        - name: JGROUPS_DISCOVERY_PROTOCOL
          value: kubernetes.KUBE_PING
        - name: JGROUPS_DISCOVERY_PROPERTIES
          value: dump_requests=true
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /realms/master
            port: 8080

When I check the logs of the pod created by applying the above config, I keep seeing the error below (dns_query can not be null or empty):

...
2022-10-17 06:06:00,448 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: ISPN000085: Error while trying to create a channel using the specified configuration file: default-configs/default-jgroups-kubernetes.xml
2022-10-17 06:06:00,448 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: dns_query can not be null or empty
...

I was wondering if you have any idea how I can solve this issue.

Thanks in advance.

Hi, in the Quarkus distribution (Keycloak 17+), distributed cache setup is much simpler on Kubernetes.

Take a look at this doc: Configuring distributed caches - Keycloak

You need:

  • KC_CACHE_STACK=kubernetes
  • JAVA_OPTS_APPEND=-Djgroups.dns.query=<headless-service>, where <headless-service> is the fully qualified name of a headless service. A headless service named keycloak-headless in the foobar namespace will have the name keycloak-headless.foobar.svc.cluster.local.

That’s all you need; you can remove all the JGROUPS_* and KUBERNETES_NAMESPACE variables.
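In a Deployment, that boils down to env entries like these (a sketch assuming a headless service named keycloak-headless in the foobar namespace; adjust both names to your cluster):

```yaml
env:
  - name: KC_CACHE_STACK
    value: "kubernetes"   # selects the DNS_PING-based "kubernetes" JGroups stack
  - name: JAVA_OPTS_APPEND
    value: "-Djgroups.dns.query=keycloak-headless.foobar.svc.cluster.local"
```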

Thanks so much for your response.