In its simplest form, an RMI load balancer is just another RMI server that holds references/handles to the other RMI servers providing a service. As an example, say you have an adder service:

import java.rmi.Remote;
import java.rmi.RemoteException;

// A remote interface must extend Remote, and each remote method
// must declare RemoteException.
public interface Adder extends Remote {
  int add(int a, int b) throws RemoteException;
}

Along with the RMI servers that implement the above interface, you also make your "new" load balancer RMI service implement it. That way, the client doesn't care whether it's hitting an adder service directly or the load balancer.
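A minimal sketch of that idea, with hypothetical names (`AdderWorker`, `AdderBalancer`) and the RMI plumbing (`Remote`, `RemoteException`, registry lookups) dropped so it runs locally:

```java
import java.util.List;

// Local stand-in for the remote interface (real code would extend
// java.rmi.Remote and declare RemoteException).
interface Adder {
  int add(int a, int b);
}

// A worker that actually performs the addition.
class AdderWorker implements Adder {
  public int add(int a, int b) { return a + b; }
}

// The balancer implements the SAME interface, so a client holding an
// Adder reference cannot tell it apart from a real worker.
class AdderBalancer implements Adder {
  private final List<Adder> workers;
  private int next = 0; // simple round-robin cursor

  AdderBalancer(List<Adder> workers) { this.workers = workers; }

  public synchronized int add(int a, int b) {
    Adder worker = workers.get(next);
    next = (next + 1) % workers.size();
    return worker.add(a, b); // in real RMI this would be a remote call
  }
}

public class Main {
  public static void main(String[] args) {
    Adder balancer = new AdderBalancer(
        List.of(new AdderWorker(), new AdderWorker()));
    System.out.println(balancer.add(2, 3)); // prints 5, same as any worker
  }
}
```

Round-robin is used here only because it's the simplest policy; the point is that the client-facing type is identical either way.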

When the load balancer starts up, it initializes references to all the "worker" RMI services (one or more of your adder services) in a loop, based on some predefined (or configurable) worker count. When a request comes in, find an adder service that is free, dispatch the request to it, and wait for the result. If no server is free, you can either queue the request in the balancer, or send it to a busy service anyway and let RMI's own concurrency handling take care of the queuing.
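One way to sketch the "wait for a free worker" dispatch is to keep idle workers in a blocking queue: taking a worker blocks while all are busy (which effectively queues the request inside the balancer), and putting it back marks it free. The names below are hypothetical, and the RMI networking is again omitted:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Local stand-ins for the remote interface and a worker (real code
// would use RMI stubs obtained from the registry).
interface Adder {
  int add(int a, int b);
}

class AdderWorker implements Adder {
  public int add(int a, int b) { return a + b; }
}

// Balancer that queues requests until a worker is free.
class QueueingBalancer implements Adder {
  private final BlockingQueue<Adder> idle;

  QueueingBalancer(List<Adder> workers) {
    // Fair queue seeded with all workers marked idle.
    this.idle = new ArrayBlockingQueue<>(workers.size(), true, workers);
  }

  public int add(int a, int b) {
    final Adder worker;
    try {
      worker = idle.take(); // blocks while every worker is busy
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new RuntimeException("interrupted waiting for a free worker", e);
    }
    try {
      return worker.add(a, b); // remote call in real RMI
    } finally {
      idle.offer(worker); // capacity equals worker count, so this never fails
    }
  }
}

public class Main {
  public static void main(String[] args) {
    Adder balancer = new QueueingBalancer(List.of(new AdderWorker()));
    System.out.println(balancer.add(2, 3)); // prints 5
  }
}
```

The alternative mentioned above (forwarding to a busy worker and letting RMI's server-side threading absorb the load) needs no queue at all, at the cost of losing control over where requests pile up.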

The devil is in the details though...
