JDBC problem with database when scaling down within AWS ECS Fargate

I seem to have a problem with my Keycloak 8.0.1 ECS Fargate cluster:

When ECS scales up from 3 containers to 10, JDBC discovery writes 2 rows of unique information (own_addr / cluster_name / ping_data) to an RDS Postgres DB for service discovery, and this part works with no problems.
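In case it helps to see what I mean, this is roughly how I inspect that table. The table and column layout here assume the default JDBC_PING initialize_sql (one row per member per cluster), so the table name `jgroupsping` and the `bytea` type are assumptions from my setup, not necessarily yours:

```sql
-- Assumed default JDBC_PING layout (adjust the table name to whatever
-- your initialize_sql actually creates on Postgres):
-- CREATE TABLE IF NOT EXISTS jgroupsping (
--     own_addr     varchar(200) NOT NULL,
--     cluster_name varchar(200) NOT NULL,
--     ping_data    bytea        DEFAULT NULL,
--     PRIMARY KEY (own_addr, cluster_name)
-- );

-- List the members each cluster currently advertises for discovery.
SELECT cluster_name, own_addr
FROM jgroupsping
ORDER BY cluster_name, own_addr;
```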

However, when it scales down and a container leaves the cluster, only 1 of these rows seems to be removed. This leaves zombie entries that the remaining containers keep trying to cluster with, causing CPU spikes and eventually failing containers.
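To illustrate the zombie entries, this is the kind of check and manual cleanup I end up doing after a scale-down. Again this assumes the default JDBC_PING table layout; the cluster name and address values below are placeholders for my environment:

```sql
-- After scaling down from 10 to 3 containers I would expect 3 rows per
-- cluster_name, but one cluster_name keeps rows for departed containers.
SELECT cluster_name, COUNT(*) AS members
FROM jgroupsping
GROUP BY cluster_name;

-- Manual workaround: delete the stale row for a container that has left.
-- 'ejb' and the own_addr value are placeholders, not my real values.
DELETE FROM jgroupsping
WHERE cluster_name = 'ejb'
  AND own_addr = '<departed-member-uuid>';
```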

Has anyone experienced this before?