JBoss Clustering Architecture – Distributed Replicant Manager

My understanding of the Distributed Replicant Manager (DRM) is that it lets you attach a piece of serializable data (a stub) to a cluster node and manage it.

Examples of this data include the list of stubs for a given RMI server: each node has a stub to share with the other nodes. The DRM enables the sharing of these stubs in the cluster and lets you know which node each stub belongs to.

If one of the nodes leaves the cluster, its stub is automatically removed from the list of replicants (stubs) that the DRM maintains.

Also, for each set of replicants the DRM holds an ID, which is identical on all nodes in the cluster.

I used the DRM to attach a replicant to a node in the cluster. The replicant contains a String holding the node IP, which I get from the jboss.bind.address property. Every time my cluster goes through a topology change, my service bean on the master node prints out the replicant list.

My service MBean is as follows:

package com.example;

import java.util.LinkedList;
import java.util.List;

import org.jboss.ha.framework.interfaces.HAPartition;
import org.jboss.ha.singleton.HASingletonSupport;
import org.jboss.logging.Logger;

public class CoordinatorHAService extends HASingletonSupport
		implements CoordinatorHAServiceMBean {

	private static Logger logger =
			Logger.getLogger(CoordinatorHAService.class);

	/**
	 * Can be used instead of the constructor
	 * defined in jboss-service.xml
	 */
	private static final String BIND_ADDRESS =
		System.getProperty("jboss.bind.address");

	/**
	 * Custom name for this HA partition, overrides
	 * the default HA partition name which is
	 * canonical name of this MBean
	 */
	private static final String COORDINATOR_HA_SERVICE_NAME =
			"ServiceName:CoordinatorHAService";

	/**
	 * Current node IP
	 */
	private String nodeIp = null;

	/**
	 * Constructor that gets value of this node IP
	 * from 'jboss.bind.address' property
	 * defined in jboss-service.xml
	 *
	 */
	public CoordinatorHAService(String nodeIp) {
		this.nodeIp = nodeIp;

	}

	public void startService() throws Exception {

		try {
			/*
			 * The call to super must be made before getting the
			 * HAPartition. If super is not called, the HAPartition
			 * will be null.
			 *
			 * Alternatively, you can use an InitialContext to look up
			 * the default partition; then you don't have to call
			 * super.startService(). I haven't tested it, so I am not
			 * sure of the results:
			 *
			 * InitialContext ic = new InitialContext();
			 * String partitionName = ServerConfigUtil.getDefaultPartitionName();
			 * String partitionJndi = "/HAPartition/" + partitionName;
			 * HAPartition partition = (HAPartition) ic.lookup(partitionJndi);
			 */
			super.startService();

			/*
			 * The HAPartition gives access to the replicant manager,
			 * which I am using to add node replicants to.
			 */
			HAPartition partition = super.getPartition();

			if (partition != null) {
				partition.getDistributedReplicantManager().add(
						this.getServiceHAName(), this.getNodeIp());
			}

		} catch (Exception e) {
			logger.error("Failed to start service", e);
			this.stopService();
		}
	}

	public synchronized void stopService() throws Exception {
		super.stopService();
	}

	public boolean isMasterNode() {
		return super.isMasterNode();
	}

	/**
	 * Called when node is elected as a master node
	 */
	public void startSingleton() {

	}

	/**
	 * Called when node stops acting as a master node
	 */
	public void stopSingleton() {

	}

	/**
	 * Override this method only if you need to provide
	 * a custom partition wide unique service name.
	 * The default implementation will usually work,
	 * provided that the getServiceName() method returns
	 * a unique canonical MBean name.
	 *
	 */
	public String getServiceHAName() {
		// return super.getServiceHAName();
		return CoordinatorHAService.COORDINATOR_HA_SERVICE_NAME;
	}

	/**
	 * Called when there are topology changes in the cluster.
	 */
	public void partitionTopologyChanged(List newReplicants,
			int newViewID) {
		super.partitionTopologyChanged(newReplicants, newViewID);

		/*
		 * If the current node is the master node,
		 * print the replicants in the cluster.
		 */
		if (this.isMasterNode()) {

			List<String> clusterNodeIps =
					new LinkedList<String>(newReplicants);

			for (String clusterNodeIp : clusterNodeIps) {
				logger.info("Replicant IP: " + clusterNodeIp);
			}
		}
	}

	/**
	 * Gets current node's IP
	 */
	public String getNodeIp() {
		return nodeIp;
	}
}

… and this is my jboss-service.xml:

<?xml version="1.0" encoding="UTF-8"?>

<server>
	<mbean code="com.example.CoordinatorHAService"
		name="com.example:service=CoordinatorHAService">
		<constructor>
			<arg type="java.lang.String"
				value="${jboss.bind.address:127.0.0.1}" />
		</constructor>
	</mbean>
</server>

In JBoss clustering architecture, DRM sits on top of HAPartition which abstracts the communication framework.

Maybe this is not the most correct way to do it, but I do find the DRM useful when I want to manage information about my cluster nodes and don't want to rely on the HAPartition to provide me with that information.

The moment a new node joins the cluster, I insert its IP into my set of replicants, so I always have accurate and up-to-date information about the nodes in my cluster. If a node dies, my list of replicants gets updated, with the dead node's IP removed from the list. The source code is attached to this post, or you can just copy/paste it if you feel like it.

JBoss clustering is a very big topic, so if anyone is interested, the JBoss group provides a PDF book called JBoss AS Clustering on their website. It's really useful and easy to understand, and it walks through the fundamental concepts of clustering. The book is a bit long, but worthwhile to have a look at ;)

Any comments and/or flames about this post are welcome …

cheers


JBoss Clustering – Shared State Across Cluster Partition

Did you know that if you have a JBoss cluster, the HA singleton service beans on each node can share a common memory state? The state is a memory space shared by all HA singleton service beans in a cluster. It is possible to save an object to the state using the HA singleton service bean on one node, and to retrieve that object on another node.

The implementation is very easy. Let's assume you have two nodes in a cluster, and on each node an HA singleton service bean is running. Let's call them “service A” and “service B“. Your service beans should extend the HASingletonSupport class. HASingletonSupport in turn extends HAServiceMBeanSupport. The latter gives you access to two methods:

public void setDistributedState(String key, Serializable value) throws Exception

and

public Serializable getDistributedState(String key)

These are convenience methods that allow you to save objects to and retrieve them from the shared state. The value can be whatever you want – primitive data types or even POJOs – as long as it is serializable.

Let's say that somewhere inside your service A, you have the following method:

public void setObjectToSharedState() {
	String key = "test-1";
	String value = "Testing memory state";

	try {
		this.setDistributedState(key, value);
	} catch (Exception e) {
		e.printStackTrace();
	}
}

Now, what has happened is that, by using your service A, you have saved a String value into the shared memory under the key name “test-1“. You can use this key name in your service B to retrieve the corresponding value from the shared memory, in this case a String. Let's say the following method sits somewhere in your service B bean, on another node in the cluster:

	public void getObjectFromSharedState()  {
		String key = "test-1";
		String value = (String) this.getDistributedState(key);
		System.out.println("The value is: " + value);
	}

By specifying a key name, you can retrieve from the shared memory whatever is saved under that key. You just have to know the type to cast it back to.
Also keep in mind: if you specify a wrong key name, the retrieved object will be null.

Shared memory state is another option if you want to share something between your nodes and for some reason don't want to use JBoss Cache.
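Whatever you put into the distributed state has to be serializable, since setDistributedState() takes a Serializable value. Below is a minimal sketch of such a value object – NodeInfo is a hypothetical class of my own, not part of any JBoss API – with a quick round-trip check that it really serializes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical POJO; any value passed to setDistributedState()
// must implement Serializable so it can be replicated.
public class NodeInfo implements Serializable {

	private static final long serialVersionUID = 1L;

	private final String ip;
	private final int port;

	public NodeInfo(String ip, int port) {
		this.ip = ip;
		this.port = port;
	}

	public String getIp() { return ip; }

	public int getPort() { return port; }

	// Quick round-trip check that the class really serializes.
	public static void main(String[] args) throws Exception {
		NodeInfo original = new NodeInfo("192.168.62.12", 1099);

		ByteArrayOutputStream bos = new ByteArrayOutputStream();
		new ObjectOutputStream(bos).writeObject(original);

		ObjectInputStream in = new ObjectInputStream(
				new ByteArrayInputStream(bos.toByteArray()));
		NodeInfo copy = (NodeInfo) in.readObject();

		System.out.println(copy.getIp() + ":" + copy.getPort());
	}
}
```

If the value class does not implement Serializable, the call will fail at replication time, so it is worth checking this up front.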

JBoss Clustering – How Many Nodes in the Cluster?

If you want to know how many nodes there are in the current cluster partition, all you have to do is ask the HAPartition for the node list. The HAPartition represents your cluster partition, and it contains all the information you need to know about your cluster and its nodes: their host names, IPs, and positions in the cluster view.

Let's assume you have a service bean that extends HASingletonSupport. HASingletonSupport in turn extends HAServiceMBeanSupport.

HAServiceMBeanSupport is the one that gives you access to the HAPartition object.

The code below requests the HAPartition object and the node list; you can put it somewhere in your service bean:

	HAPartition partition = getPartition();
	ClusterNode[] nodes = partition.getClusterNodes();
	System.out.println(nodes.length);

A ClusterNode object represents a node in the cluster. It contains information about the node's host name, its internet address, and a few more things. getClusterNodes() returns an array containing as many ClusterNode objects as there currently are in your cluster, so by reading the array's length you will know how many nodes your cluster has.

Another way is to do practically the same thing, but to request the current view of your cluster from the HAPartition:

	HAPartition partition = getPartition();
	Vector v = partition.getCurrentView();

	System.out.println(v.size());

	for (Object o : v) {
		System.out.println(o.toString());
	}

The view, which is a Vector, contains information about the node sockets. When printed, each entry gives you a String representation of the node's IP and port: xxx.xxx.xxx.xxx:port. Also, by printing the size of the Vector, you get the number of nodes in the cluster.
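Since each view entry prints as xxx.xxx.xxx.xxx:port, you can pull the address and port back out with plain String handling. A small sketch (ViewEntryParser is a hypothetical helper name; it assumes the entry format described above):

```java
// Splits a cluster-view entry of the form "xxx.xxx.xxx.xxx:port"
// into its address and port parts. Assumes the view entries print
// in the format described above.
public class ViewEntryParser {

	public static String hostOf(String viewEntry) {
		int colon = viewEntry.lastIndexOf(':');
		return viewEntry.substring(0, colon);
	}

	public static int portOf(String viewEntry) {
		int colon = viewEntry.lastIndexOf(':');
		return Integer.parseInt(viewEntry.substring(colon + 1));
	}

	public static void main(String[] args) {
		String entry = "192.168.62.12:1099";
		System.out.println(hostOf(entry)); // 192.168.62.12
		System.out.println(portOf(entry)); // 1099
	}
}
```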

Important note:
I noticed there is some delay between the time a node leaves the cluster and the time the HAPartition returns an updated view. In other words, after a node has left the cluster and a topology change has occurred, the HAPartition may still return an old view containing the dead node. So be careful.

Also, getPartition() may return null if super.startService() hasn't been called. Have a look at the implementation of HAServiceMBeanSupport and my other post, JBoss Clustering – HASingleton Service.

Thats it :)

Stateless Beans and Annotations

Since EJB 3.0, it is possible to use JDK 5.0 metadata annotations to create EJB 3.0 Java beans. This makes development very easy. The only drawback, as I see it, is that when you want to change, add, or remove an annotation, you actually have to recompile the class.
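To see why recompilation is needed: annotations are baked into the class file as metadata at compile time and read back via reflection at runtime. A minimal, EJB-independent sketch (the @Binding annotation here is hypothetical, standing in for something like @RemoteBinding):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical annotation, standing in for EJB annotations such as
// @RemoteBinding; its value is compiled into the .class file.
@Retention(RetentionPolicy.RUNTIME)
@interface Binding {
	String jndiName();
}

@Binding(jndiName = "com/example/DemoBean/remote")
public class AnnotationDemo {

	public static void main(String[] args) {
		// The metadata is read back via reflection at runtime; changing
		// it means editing the source and recompiling the class.
		Binding b = AnnotationDemo.class.getAnnotation(Binding.class);
		System.out.println(b.jndiName());
	}
}
```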

The example below shows how to create a stateless enterprise Java bean using annotations. The bean implements a remote interface.

The interface:

	package com.test.stateless.interfaces;

	import javax.ejb.Remote;

	@Remote
	public interface StatelessTestRemote {
		public void doSomething();
	}

The “Remote” annotation specifies that the class is the remote interface of the bean.

The bean:


package com.test.stateless.beans;

import javax.ejb.Stateless;
import org.jboss.annotation.ejb.RemoteBinding;
import com.test.stateless.interfaces.StatelessTestRemote;

@Stateless
@RemoteBinding(jndiBinding = "com/test/stateless/beans/StatelessTestBean/remote")
public class StatelessTestBean implements StatelessTestRemote {

	public void doSomething() {

	}
}

The “Stateless” annotation specifies that the class is a stateless bean.
The “RemoteBinding” annotation specifies the JNDI name for the interface.

Keep in mind that you can use @Remote on the implementation bean itself; it doesn't have to be inside the interface class. For example:

@Stateless
@Remote({ StatelessTestRemote.class })
@RemoteBinding(jndiBinding = "com/test/stateless/beans/StatelessTestBean/remote")
public class StatelessTestBean implements StatelessTestRemote {

Thats it :)

Update:
As Laird Nelson pointed out in his response to this post, it is possible to override annotations in the XML descriptor. I looked into it, and yes indeed – EJB 3.0 allows you to override the behavior of annotations in the source code, although there are some limitations on which annotations can be overridden. You can refer to the article JBoss EJB 3.0 partial deployment descriptors for a more detailed explanation :)

JBoss Clustering – HASingleton Service

Have you ever dealt with a clustered singleton service? How do you determine which cluster node is the master? Well, if I am on the current node, I can simply ask whether I am the master or not. But what if I already know that the current node is not the master, and I want to determine which of the other nodes in the cluster is the master?

First, I would like to give a brief summary of the HASingleton service (HA stands for High Availability).

Summary:
An HASingleton service is a service that is deployed on every node in a cluster but runs on only one node, while the other nodes remain passive. The node the service runs on is the master node.

How does JBoss select the master node?
Well, the first node in the cluster becomes the master node. If the existing master node leaves the cluster, as a result of a shutdown for example, another node is selected as master from the remaining nodes.

The master node can control which tasks get executed, and how many times. HASingletons also have the ability to share a memory state across the clustered partition – something like caching …

Solution:
Let's assume that I have a service bean that extends the HASingletonSupport class. HASingletonSupport in turn extends HAServiceMBeanSupport and implements two interfaces: HASingletonMBean and HASingleton. Together they give me those wonderful APIs that can tell me whether the current node is the master or not, what the status of my cluster is, how many nodes it has, and so on.

import java.util.List;

import org.jboss.ha.singleton.HASingletonSupport;
import org.jboss.logging.Logger;

public class MyHAService extends HASingletonSupport implements
		MyHAServiceMBean {

	private static Logger logger =
			Logger.getLogger(MyHAService.class);
 public void startService() throws Exception {
        logger.info(" *** STARTED MY SINGLETON SERVICE *** ");
        super.startService();
 }

 public void stopService() throws Exception {
        logger.info(" *** STOPPED MY SINGLETON SERVICE *** ");
        super.stopService();
 }

public boolean isMasterNode() {
        return super.isMasterNode();
}

public void startSingleton() {
        logger.info(" *** CURRENT NODE IP:"
                + this.getPartition().getClusterNode()
			.getIpAddress().getHostAddress() +
			" ELECTED AS A MASTER NODE *** ");
}

public void stopSingleton() {
        logger.info(" *** CURRENT NODE IP:"
                + this.getPartition().getClusterNode()
			.getIpAddress().getHostAddress()
                + " STOPPED ACTING AS A MASTER NODE *** ");
}

public void partitionTopologyChanged(List newReplicants, int newViewID) {
        logger.info(" *** TOPOLOGY CHANGE STARTING *** ");
        super.partitionTopologyChanged(newReplicants, newViewID);

   }
}

startSingleton() – invoked when the current node is elected as master.
stopSingleton() – invoked when the current node stops acting as master.
partitionTopologyChanged() – invoked when a node joins or leaves the cluster.

As I mentioned before, I can find out whether the current node is the master node by calling isMasterNode(). The method returns true if the node is the master and false if it's not.

If I already know that the current node is not the master, I can ask the clustered partition (the cluster) which node is the master. For example, I can request the current view of my cluster.

The implementation can be similar to the method below, which you can put inside your service bean:

	private String getMasterSocket() {

		HAPartition partition = this.getPartition();

		if (partition != null && partition.getCurrentView() != null) {
			return partition.getCurrentView().get(0).toString();
		}

		return null;
	}

The method above returns a string containing the IP and port of the master node, for example:

192.168.62.12:1099

The HAPartition service maintains a registry of nodes, in view order, across the cluster. Keep in mind that the order of the nodes in the view does not necessarily reflect the order in which the nodes joined the cluster.

So the first node in the view, as you can see below, will be the master node.
Simple as that.

return partition.getCurrentView().get(0).toString();

Please note:
The getPartition() method may return null if super.startService() hasn't been called. Have a look at the implementation of HAServiceMBeanSupport and my other post, JBoss Clustering – How Many Nodes in the Cluster?.

Deployment of MBean Separately to Its Interface

A few days ago I came across a nasty little issue during MBean deployment. What I did was separate my MBean class and its interface into two archives: first I would deploy an archive containing my interfaces, and then I would deploy an archive containing my bean classes.

Why did I do it this way?

Well, to minimize the chances of getting a ClassCastException, since JBoss creates a proxy to the bean from its interface. Having the interfaces deployed separately from the bean itself allows me to easily modify the business logic in the bean (if needed): I only have to redeploy the bean itself, without also redeploying the interface, so my proxies to the bean in the system are not affected.

To my big surprise, I got an exception:

	org.jboss.deployment.DeploymentException: Class does not expose a management interface:
	java.lang.Object; - nested throwable:
	(javax.management.NotCompliantMBeanException: Class does not expose a management interface: java.lang.Object)

I could not understand where I went wrong. I had my MBean class:

public class MyService extends ServiceMBeanSupport implements MyServiceMBean {

	public void startService() throws Exception {
		...
	}

	public void stopService() {
		...
	}
}

I had my interface, whose name ends with the suffix ‘MBean’. The interface must have this suffix; otherwise, you will receive a Class does not expose a management interface exception:

	public interface MyServiceMBean extends ServiceMBean { ... }

I had my jboss-service.xml:

	<?xml version="1.0" encoding="UTF-8"?>
		<server>
			<mbean code="com.example.MyService" name="com.example:service=MyService">
			</mbean>
		</server>

Finally, I discovered that it was because of the way I did the packaging. If you are ever going to package an MBean and its interface in two separate archives, they (the MBean and its interface) must sit under the same package name!

For example: if, in archive A, I put my MBean class under the package name “com.example.test“, then in archive B I have to put its interface under “com.example.test“ as well.