Keycloak 19.0.2, lua-resty-openidc and invalid_request

Hi,
I’ve installed Keycloak v19.0.2 on my Kubernetes cluster using the codecentric keycloakx Helm chart, and all is working fine (access to the admin console works perfectly). The server is accessible through an OpenResty server on 2 domains: keycloak.local (full access from the VPN) and auth.mydomain.com for external, limited access.
The public access is configured with the following code:

server {
  listen 443 ssl;
  server_name auth.mydomain.com;

  ssl_certificate     /etc/ssl/private/mydomain.crt;
  ssl_certificate_key /etc/ssl/private/mydomain.key;

  access_log /usr/local/openresty/nginx/logs/keycloak_access.log;
  error_log  /usr/local/openresty/nginx/logs/keycloak_error.log debug;

  location ~ ^/auth/(resources|realms|js)/(.*)$ {
    include /etc/openresty/conf.d/inc/proxy.conf;
    proxy_pass http://upstream_ingress/auth/$1/$2; # kubernetes ingress
  }
}
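
Side note (my addition, not part of the original config): since only resources, realms and js are matched, nothing else under /auth, notably the admin console, is proxied on the public domain. A catch-all inside the same server block would make that intent explicit:

  # Sketch: reject anything the regex location above does not match,
  # making it visible that the admin console is not exposed here.
  location / {
    return 404;
  }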

I’m trying to expose and protect an internal application with the following configuration:

server {
  listen 443 ssl;
  server_name myapp.mydomain.com;

  ssl_certificate     /etc/ssl/private/mydomain.crt;
  ssl_certificate_key /etc/ssl/private/mydomain.key;

  access_log /usr/local/openresty/nginx/logs/myapp_access.log;
  error_log  /usr/local/openresty/nginx/logs/myapp_error.log;

  set $session_storage cookie;
  set $session_cookie_persistent on;
  set $session_cookie_secure on;
  set $session_cookie_httponly on;
  set $session_cookie_samesite Strict;

  server_tokens off;

  access_by_lua '
    local opts = {
      redirect_uri = "/auth/redirect_uri",
      accept_none_alg = false,
      renew_access_token_on_expiry = true,
      discovery = "https://auth.mydomain.com/auth/realms/myrealm/.well-known/openid-configuration",
      token_endpoint_auth_method = "client_secret_basic",
      client_id = "myapp",
      client_secret = "--redacted-secret---",
      logout_path = "/logout",
      redirect_after_logout_with_id_token_hint = true,
      scope = "openid mail profile",
      accept_unsupported_alg = false,
      revoke_tokens_on_logout = true,
      ssl_verify = "no",
      redirect_uri_scheme = "https",
      session_contents = {id_token=true}
    }

    local oidc = require("resty.openidc")

    -- call authenticate for OpenID Connect user authentication
    local res, err = oidc.authenticate(opts)
    if err then
      ngx.log(ngx.ERR, "OIDC authentication failed: ", err)
      ngx.status = 403
      ngx.exit(ngx.HTTP_FORBIDDEN)
    end
    ';

  location = /logout {
    default_type text/html;
    return 200 'Logout done. <a href="/">Login again</a>';
  }

  location / {
    include /etc/openresty/conf.d/inc/proxy.conf;
    proxy_pass http://upstream_app/;
  }
}
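
A side note on the session settings above (my observation, not from the original post): with cookie storage, lua-resty-session generates a random secret at startup unless one is pinned, so an nginx reload can invalidate existing sessions and multiple instances won’t share them. A minimal sketch, with a placeholder value:

  # Sketch: pin the lua-resty-session secret so session cookies survive
  # reloads and are shared across instances (value is a placeholder).
  set $session_secret "replace-with-a-long-random-string";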

When I try to connect to the server, I’m redirected to Keycloak, but I get stuck there on “We’re sorry … Invalid request”, where the URL is https://auth.mydomain.com/auth/realms/myrealm/protocol/openid-connect/auth?response_type=code&client_id=my-app&state=b18565c6d6d0a9fcbbcb177593fb39c3&redirect_uri=https%3A%2F%2Fmyapp.mydomain.com%2Fauth%2Fredirect_uri&nonce=d5769f9a89ed1bfeeb0ac9f6d423f66e&scope=openid

On the keycloak logs, I can see:

2022-09-24 22:25:48,066 WARN  [org.keycloak.events] (executor-thread-13) type=LOGIN_ERROR, realmId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX, clientId=null, userId=null, ipAddress=YY.YY.YY.YY, error=invalid_request

While on Openresty side, I see:

2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:1511: authenticate(): session.present=nil, session.data.id_token=false, session.data.authenticated=nil, opts.force_reauthorize=nil, opts.renew_access_token_on_expiry=true, try_to_renew=true, token_expired=false
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:560: openidc_discover(): openidc_discover: URL is: https://auth.mydomain.com/auth/realms/OPTV/.well-known/openid-configuration
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:566: openidc_discover(): discovery data not in cache, making call to discovery endpoint
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:434: openidc_configure_proxy(): openidc_configure_proxy : don't use http proxy
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:579: openidc_discover(): response data: {"issuer":"http://auth.mydomain.com/auth/realms/OPTV","authorization_endpoint":"http://auth.mydomain.com/auth/realms/OPTV/protocol/openid-connect/auth","token_endpoint":"http://auth.mydomain.com/auth/realms/OPTV/protocol/openid-connect/token","introspection_endpoint":"http://auth.mydomain.com/auth/realms/OPTV/protocol/openid-connect/token/introspect","userinfo_endpoint":"http://auth.mydomain.com/auth/realms/OPTV/protocol/openid-connect/userinfo","end_session_endpoint":"http://auth.mydomain.com/auth/realms/OPTV/protocol/openid-connect/logout","frontchannel_logout_session_supported":true,"frontchannel_logout_supported":true,"jwks_uri":"http://auth.mydomain.com/auth/realms/OPTV/protocol/openid-connect/certs","check_session_iframe":"http://auth.mydomain.com/auth/realms/OPTV/protocol/openid-connect/login-status-iframe.html","grant_types_supported":["authorization_code","implicit","refresh_token","password","client_credentials","urn:ietf:params:oauth:grant-type:device_code","urn:openid:params:grant-type:ciba"],"acr_values_supported":["0","1"],"response_types_supported":["code","none","id_token","token","id_token token","code id_token","code token","code id_token token"],"subject_types_supported":["public","pairwise"],"id_token_signing_alg_values_supported":["PS384","ES384","RS384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"id_token_encryption_alg_values_supported":["RSA-OAEP","RSA-OAEP-256","RSA1_5"],"id_token_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"userinfo_signing_alg_values_supported":["PS384","ES384","RS384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512","none"],"userinfo_encryption_alg_values_supported":["RSA-OAEP","RSA-OAEP-256","RSA1_5"],"userinfo_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"request_object_signing_alg_values_supported":["PS384","ES384","RS384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512","none"],"request_object_encryption_alg_values_supported":["RSA-OAEP","RSA-OAEP-256","RSA1_5"],"request_object_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"response_modes_supported":["query","fragment","form_post","query.jwt","fragment.jwt","form_post.jwt","jwt"],"registration_endpoint":"http://auth.mydomain.com/auth/realms/OPTV/clients-registrations/openid-connect","token_endpoint_auth_methods_supported":["private_key_jwt","client_secret_basic","client_secret_post","tls_client_auth","client_secret_jwt"],"token_endpoint_auth_signing_alg_values_supported":["PS384","ES384","RS384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"introspection_endpoint_auth_methods_supported":["private_key_jwt","client_secret_basic","client_secret_post","tls_client_auth","client_secret_jwt"],"introspection_endpoint_auth_signing_alg_values_supported":["PS384","ES384","RS384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"authorization_signing_alg_values_supported":["PS384","ES384","RS384","HS256","HS512","ES256","RS256","HS384","ES512","PS256","PS512","RS512"],"authorization_encryption_alg_values_supported":["RSA-OAEP","RSA-OAEP-256","RSA1_5"],"authorization_encryption_enc_values_supported":["A256GCM","A192GCM","A128GCM","A128CBC-HS256","A192CBC-HS384","A256CBC-HS512"],"claims_supported":["aud","sub","iss","auth_time","name","given_name","family_name","preferred_username","email","acr"],"claim_types_supported":["normal"],"claims_parameter_supported":true,"scopes_supported":["openid","roles","profile","microprofile-jwt","address","email","offline_access","acr","phone","web-origins"],"request_parameter_supported":true,"request_uri_parameter_supported":true,"require_request_uri_registration":true,"code_challenge_methods_supported":["plain","S256"],"tls_client_certificate_bound_access_tokens":true,"revocation_endpoint":"http://auth.mydomain.com/auth/realm
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:105: openidc_cache_set(): cache set: success=true err=nil forcible=false
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:665: openidc_get_token_auth_method(): 1 => private_key_jwt
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:665: openidc_get_token_auth_method(): 2 => client_secret_basic
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:665: openidc_get_token_auth_method(): 3 => client_secret_post
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:667: openidc_get_token_auth_method(): configured value for token_endpoint_auth_method (client_secret_post) found in token_endpoint_auth_methods_supported in metadata
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:695: openidc_get_token_auth_method(): token_endpoint_auth_method result set to client_secret_post
2022/09/24 20:15:16 [debug] 3150#3150: *1 [lua] openidc.lua:1542: authenticate(): Authentication is required - Redirecting to OP Authorization endpoint
2022/09/24 20:16:15 [info] 3150#3150: *2 client timed out (110: Connection timed out) while waiting for request, client: xxxxx, server: 0.0.0.0:443

What am I doing wrong? The invalid_request error surely indicates a missing parameter or a misconfiguration, but I can’t figure it out.

Thank you

This is my Helm chart values file (I’m using a single pod for now, but I’ve enabled autoscaling with 2 pods and it works perfectly):

replicas: 1

image:
  repository: quay.io/keycloak/keycloak
  tag: "19.0.2"
  pullPolicy: IfNotPresent

extraVolumes: |
  - name: cache-ispn
    configMap:
      name: cache-ispn-kubeping-cm
  - name: host-timezone
    hostPath:
      path: /etc/localtime

extraVolumeMounts: |
  - name: cache-ispn
    mountPath: /opt/keycloak/conf/cache-ispn-kubeping.xml
    subPath: cache-ispn-kubeping.xml
  - name: host-timezone
    mountPath: /etc/localtime

restartPolicy: Always

rbac:
  create: true
  rules:
    # RBAC rules for KUBE_PING
    - apiGroups:
        - ""
      resources:
        - pods
      verbs:
        - get
        - list

command:
  - "/opt/keycloak/bin/kc.sh"
  - "--verbose"
  - "start"
  - "--auto-build"
  - "--http-enabled=true"
  - "--http-port=8080"
  - "--hostname-strict=false"
  - "--hostname-strict-https=false"
  - "--hostname-strict-backchannel=false"
  - "--spi-events-listener-jboss-logging-success-level=info"
  - "--spi-events-listener-jboss-logging-error-level=warn"
#  - "--spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true"
#  - "--spi-sticky-session-encoder-infinispan-should-attach-route=false"

extraEnv: |
  - name: KEYCLOAK_ADMIN
    valueFrom:
      secretKeyRef:
        name: {{ include "keycloak.fullname" . }}-admin-creds
        key: user
  - name: KEYCLOAK_ADMIN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ include "keycloak.fullname" . }}-admin-creds
        key: password
  - name: JAVA_OPTS_APPEND
    value: >-
      -XX:+UseContainerSupport
      -XX:MaxRAMPercentage=50.0
      -Djava.awt.headless=true
      -Dkubeping_namespace={{ .Release.Namespace }}
      -Dkubeping_label="keycloak-cluster=default"
      -Djgroups.dns.query={{ include "keycloak.fullname" . }}-headless
  - name: KC_CACHE_CONFIG_FILE
    value: cache-ispn-kubeping.xml
  - name: KC_LOG_LEVEL
    value: INFO

affinity: |
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - keycloak
          topologyKey: failure-domain.beta.kubernetes.io/zone

nodeSelector:
  tools-install: enabled

podLabels:
  app: keycloak

podDisruptionBudget:
  minAvailable: 1

secrets:
  admin-creds:
    stringData:
      user: admin
      password: '--my-admin-password--'

service:
  type: NodePort

ingress:
  enabled: true
  servicePort: http
  annotations:
    # openstack ingress required annotation to access the ELB
    kubernetes.io/ingress.class: 'cce'
    kubernetes.io/elb.id: 'xxxxxxxxxxxxxx'
    kubernetes.io/elb.ip: 'xxxxxxxxxxx'
    kubernetes.io/elb.subnet-id: 'xxxxxxxxxx'
    kubernetes.io/elb.port: '80'
  rules:
    - host: 'keycloak.local'
      paths:
        - path: /
          pathType: Prefix
    - host: 'auth.mydomain.com'
      paths:
        - path: /
          pathType: Prefix
  tls: []

database:
  vendor: postgres
  hostname: xxxxxxx
  port: 5432
  database: keycloak
  username: keycloak
  password: '---------'

http:
  relativePath: "/auth"

serviceAccount:
  create: true
  allowReadPods: true

cache:
  stack: default

proxy:
  enabled: true
  mode: edge

metrics:
  enabled: false

health:
  enabled: true

autoscaling:
  enabled: false

I didn’t see the option --proxy=edge anywhere in your configs. Maybe you should try that out. See “Using a reverse proxy” in the Keycloak documentation.

This is already the case as I’ve set the proxy to edge:

            {{- if .Values.proxy.enabled }}
            - name: KC_PROXY
              value: {{ .Values.proxy.mode }}
            {{- end }}

I didn’t notice it before, but I see this line in the logs:

2022-09-25 20:12:11,274 WARN  [org.keycloak.protocol.oidc.utils.AcrUtils] (executor-thread-3) Invalid realm configuration (ACR-LOA map)

Maybe this is related, and I’m wondering how to correct this error, as the ACR to LoA Mapping in my realm settings is empty (no key/value mapping).

This is the stacktrace when activating the TRACE log level:

2022-09-26 10:52:12,610 TRACE [org.keycloak.events] (executor-thread-0) type=LOGIN_ERROR, realmId=XXXXXXXX-XXX-XXXX-XXXX-XXXXXXXXXXXX, clientId=null, userId=null, ipAddress=XX.XX.XX.XX, error=invalid_request, requestUri=http://auth.mydomain.com/auth/realms/myrealm/protocol/openid-connect/auth, stackTrace=
    org.keycloak.events.log.JBossLoggingEventListenerProvider.logEvent(JBossLoggingEventListenerProvider.java:114)
    org.keycloak.events.EventListenerTransaction.commitImpl(EventListenerTransaction.java:62)
    org.keycloak.models.AbstractKeycloakTransaction.commit(AbstractKeycloakTransaction.java:48)
    org.keycloak.services.DefaultKeycloakTransactionManager.commit(DefaultKeycloakTransactionManager.java:146)
    org.keycloak.quarkus.runtime.integration.web.QuarkusRequestFilter.close(QuarkusRequestFilter.java:148)
    org.keycloak.quarkus.runtime.integration.web.QuarkusRequestFilter.lambda$configureEndHandler$2(QuarkusRequestFilter.java:114)
    io.vertx.ext.web.impl.RoutingContextImpl.lambda$null$0(RoutingContextImpl.java:545)
    io.vertx.ext.web.impl.SparseArray.forEachInReverseOrder(SparseArray.java:40)
    io.vertx.ext.web.impl.RoutingContextImpl.lambda$getHeadersEndHandlers$1(RoutingContextImpl.java:545)
    io.vertx.core.http.impl.Http1xServerResponse.prepareHeaders(Http1xServerResponse.java:704)
    io.vertx.core.http.impl.Http1xServerResponse.end(Http1xServerResponse.java:408)
    io.vertx.core.http.impl.Http1xServerResponse.end(Http1xServerResponse.java:388)
    io.quarkus.resteasy.runtime.standalone.VertxBlockingOutput.write(VertxBlockingOutput.java:91)
    io.quarkus.resteasy.runtime.standalone.VertxHttpResponse.writeBlocking(VertxHttpResponse.java:172)
    io.quarkus.resteasy.runtime.standalone.VertxOutputStream.close(VertxOutputStream.java:126)

Could someone tell me what parameter(s) are missing from the curl command below? I think that lua-resty-openidc is leaving something out:

curl -v  --get \
  --data-urlencode "client_id=my-app" \
  --data-urlencode "redirect_uri=/auth/redirect_uri" \
  --data-urlencode "scope=openid" \
  --data-urlencode "response_type=code" \
  --data-urlencode "state=947336569953640f930d73aeef06d550" \
  --data-urlencode "nonce=6acd3b471b99a5109db655c1816b2408" \
  https://auth.mydomain.com/auth/realms/myrealm/protocol/openid-connect/auth

Thank you

Well, all the examples I’ve found so far are based on Keycloak 15 or 16. I think that I’m missing a realm or client configuration :frowning:

After further digging, it seems that my nginx configuration was wrong :frowning: (presumably the proxy_pass URI built from regex captures was dropping the original query string, so Keycloak received the authorization request without its parameters). I’ve simplified it and now it’s working perfectly :smiley:

server {
  listen 80;
  server_name auth.mydomain.com;

  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl;
  server_name auth.mydomain.com;

  ssl_certificate     /etc/ssl/private/mydomain.crt;
  ssl_certificate_key /etc/ssl/private/mydomain.key;

  access_log /usr/local/openresty/nginx/logs/keycloak_access.log;
  error_log  /usr/local/openresty/nginx/logs/keycloak_error.log;

  location /auth/resources/ {
    include /etc/openresty/conf.d/inc/proxy.conf;
    proxy_pass http://upstream_ingress; # kubernetes ingress
  }

  location /auth/realms/ {
    include /etc/openresty/conf.d/inc/proxy.conf;
    proxy_pass http://upstream_ingress; # kubernetes ingress
  }

  location /auth/js/ {
    include /etc/openresty/conf.d/inc/proxy.conf;
    proxy_pass http://upstream_ingress; # kubernetes ingress
  }

  error_page 500 502 503 504 /50x.html;

  location = /50x.html {
    root /usr/local/openresty/nginx/html;
  }
}
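
For reference, a sketch of how the original regex location could presumably have been kept (my reconstruction, untested): when proxy_pass rewrites the URI from captures, nginx passes exactly that URI and does not append the original query string, so it has to be forwarded explicitly:

  location ~ ^/auth/(resources|realms|js)/(.*)$ {
    include /etc/openresty/conf.d/inc/proxy.conf;
    # $is_args$args re-attaches the query string that a rewritten
    # proxy_pass URI would otherwise drop.
    proxy_pass http://upstream_ingress/auth/$1/$2$is_args$args; # kubernetes ingress
  }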

The proxy include file is:

client_body_buffer_size 128k;

# Fail over to the next upstream if the current server is dead or erroring
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

# Advanced Proxy Config
send_timeout 5m;
proxy_read_timeout 360;
proxy_send_timeout 360;
proxy_connect_timeout 360;

# Basic Proxy Config
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-Uri $request_uri;
proxy_set_header X-Forwarded-Ssl on;
proxy_redirect  http://  $scheme://;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
proxy_buffers 64 256k;
proxy_buffer_size 128k;
proxy_busy_buffers_size 256k;
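
One header that is sometimes also needed behind an edge proxy (a suggestion of mine, not part of the original include): Keycloak rebuilds its public URLs from the forwarded headers, and forwarding the original port helps avoid redirects pointing at the backend port:

# Sketch: also forward the client-facing port (here the TLS port).
proxy_set_header X-Forwarded-Port $server_port;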