Unfortunately, this basic event listener does not catch syncs from LDAP.
I just found the following in the logs:
INFO [org.keycloak.storage.ldap.LDAPStorageProviderFactory] (Timer-2) Sync all users from LDAP to local store: realm: reha-plan, federation provider: test
INFO [org.keycloak.storage.ldap.LDAPStorageProviderFactory] (Timer-2) Sync all users finished: 2 imported users, 0 updated users
My goal is to catch these events and perform an external API call.
Does anyone have a suggestion on how to implement this use case?
AFAICT there are no events fired after LDAP synchronization at the moment (KC 9.0.0). I’d try to implement a custom provider that inherits from LDAPStorageProviderFactory and overrides the org.keycloak.storage.ldap.LDAPStorageProviderFactory#syncImpl method. With this you can run your custom logic.
It might also be enough to override org.keycloak.storage.ldap.LDAPStorageProviderFactory#sync and org.keycloak.storage.ldap.LDAPStorageProviderFactory#syncSince.
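To illustrate that second option, here is a minimal sketch of a factory that delegates to the built-in sync and then runs custom logic on the result. The provider id, class name, and notifyExternalApi helper are all made up; you would still need to wire in the actual API call:

```java
import java.util.Date;

import org.keycloak.models.KeycloakSessionFactory;
import org.keycloak.storage.UserStorageProviderModel;
import org.keycloak.storage.ldap.LDAPStorageProviderFactory;
import org.keycloak.storage.user.SynchronizationResult;

public class NotifyingLDAPStorageProviderFactory extends LDAPStorageProviderFactory {

    // Must differ from the built-in "ldap" id so both providers can coexist.
    public static final String PROVIDER_ID = "ldap-with-sync-hook";

    @Override
    public String getId() {
        return PROVIDER_ID;
    }

    @Override
    public SynchronizationResult sync(KeycloakSessionFactory sessionFactory,
                                      String realmId, UserStorageProviderModel model) {
        // Full "sync all users" run, then the custom hook on its summary.
        SynchronizationResult result = super.sync(sessionFactory, realmId, model);
        notifyExternalApi(result);
        return result;
    }

    @Override
    public SynchronizationResult syncSince(Date lastSync, KeycloakSessionFactory sessionFactory,
                                           String realmId, UserStorageProviderModel model) {
        // Periodic "changed users" run, same hook.
        SynchronizationResult result = super.syncSince(lastSync, sessionFactory, realmId, model);
        notifyExternalApi(result);
        return result;
    }

    // Placeholder for the external API call.
    private void notifyExternalApi(SynchronizationResult result) {
        // e.g. POST result.getAdded() / result.getUpdated() to your service
    }
}
```

You’d register it like any user storage provider, via a META-INF/services/org.keycloak.storage.UserStorageProviderFactory file containing the class name, and then pick the new provider id instead of the built-in ldap one when configuring the federation.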
Do you know which Maven dependency I have to include in my Java project in order to override the org.keycloak.storage.ldap.LDAPStorageProviderFactory#sync method?
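For what it’s worth, LDAPStorageProviderFactory ships in the org.keycloak:keycloak-ldap-federation artifact, and the storage SPI types it builds on (UserStorageProviderFactory, SynchronizationResult) are in keycloak-server-spi. A sketch of the pom.xml entries; the version and the provided scope are assumptions (the server supplies these jars at runtime), so match the version to your Keycloak installation:

```xml
<dependency>
    <groupId>org.keycloak</groupId>
    <artifactId>keycloak-ldap-federation</artifactId>
    <version>9.0.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.keycloak</groupId>
    <artifactId>keycloak-server-spi</artifactId>
    <version>9.0.0</version>
    <scope>provided</scope>
</dependency>
```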
I’ve just implemented the idea of using onImportUserFromLDAP following that article (thanks, btw!). However, I’m facing a kind of funny situation: because I’m running several replicas, when a user is updated I get one message/event per replica, since the LDAP federation sync runs in each replica. I hadn’t realized this before (Keycloak 16, but I don’t think the version matters here).
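For reference, onImportUserFromLDAP is a hook on org.keycloak.storage.ldap.mappers.LDAPStorageMapper that fires once per user the sync imports or updates. A bare-bones sketch of such a mapper, assuming the KC 16 API; the class name and the notifyExternalApi helper are made up:

```java
import org.keycloak.component.ComponentModel;
import org.keycloak.models.RealmModel;
import org.keycloak.models.UserModel;
import org.keycloak.storage.ldap.LDAPStorageProvider;
import org.keycloak.storage.ldap.idm.model.LDAPObject;
import org.keycloak.storage.ldap.idm.query.internal.LDAPQuery;
import org.keycloak.storage.ldap.mappers.AbstractLDAPStorageMapper;

public class NotifyOnImportMapper extends AbstractLDAPStorageMapper {

    public NotifyOnImportMapper(ComponentModel mapperModel, LDAPStorageProvider ldapProvider) {
        super(mapperModel, ldapProvider);
    }

    @Override
    public void onImportUserFromLDAP(LDAPObject ldapUser, UserModel user, RealmModel realm, boolean isCreate) {
        // Runs once per imported/updated user -- and therefore once per
        // replica when every replica executes its own periodic sync.
        notifyExternalApi(user.getUsername(), isCreate);
    }

    @Override
    public void onRegisterUserToLDAP(LDAPObject ldapUser, UserModel localUser, RealmModel realm) {
        // no-op: only the LDAP -> Keycloak direction is of interest here
    }

    @Override
    public UserModel proxy(LDAPObject ldapUser, UserModel delegate, RealmModel realm) {
        return delegate; // no per-attribute proxying needed
    }

    @Override
    public void beforeLDAPQuery(LDAPQuery query) {
        // no-op
    }

    // Placeholder for the external API call.
    private void notifyExternalApi(String username, boolean created) {
        // e.g. POST {username, created} to your service
    }
}
```

The mapper also needs a matching factory (extending AbstractLDAPStorageMapperFactory, registered via META-INF/services/org.keycloak.storage.ldap.mappers.LDAPStorageMapperFactory), which is omitted here for brevity.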
When I create a user through the console there is just one event, because the request hits only one of the instances. But the federation sync runs in each replica independently.
The solution to this may be something trivial, but at the moment I’m scratching my head over it.
I have a solution
Yes, each replica will run the LDAP federation sync and, of course, each run will trigger the corresponding events.
Now, I was having this problem because I was deploying the two replicas to the k8s cluster at the same time, so all replicas were created, and scheduled their periodic sync, together.
If instead you spread the replicas out in time, then when one replica’s sync period kicks in it updates the data, and for the next replica the data is already up to date, so no update/create event happens.
So, basically, the solution is not to have all the replicas created at the same time.