JBoss Clustering Architecture – Distributed Replicant Manager

My understanding of the Distributed Replicant Manager (DRM) is that it allows you to attach some serialized data (for example, a stub) to a cluster node and have the cluster manage it.

An example of this data is the list of stubs for a given RMI server. Each node has a stub to share with the other nodes. The DRM enables the sharing of these stubs in the cluster and lets you know which node each stub belongs to.

In case one of the nodes leaves the cluster, its stub is automatically removed from the list of replicants (stubs) that DRM maintains.

Also, for each set of replicants, the DRM holds an id which is identical on all nodes in the cluster.
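
To make this concrete, here is a minimal sketch of the DRM calls involved. The method names are the ones I remember from the JBoss AS 4.x DistributedReplicantManager interface, and registerNodeIp() is just a made-up helper name, so double-check everything against your JBoss version:

	private void registerNodeIp(HAPartition partition, String nodeIp)
			throws Exception {
		DistributedReplicantManager drm =
			partition.getDistributedReplicantManager();

		// Attach this node's data (here just its IP) under a cluster-wide key
		drm.add("ServiceName:CoordinatorHAService", nodeIp);

		// Any node can list all replicants registered under that key...
		List replicants = drm.lookupReplicants("ServiceName:CoordinatorHAService");

		// ...and read the id of that replicant set, which is identical on every node
		int replicantsViewId =
			drm.getReplicantsViewId("ServiceName:CoordinatorHAService");

		// On a graceful shutdown a node can remove its own replicant;
		// if the node simply dies, the DRM removes it automatically
		// drm.remove("ServiceName:CoordinatorHAService");
	}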

I used the DRM to attach a replicant to a node in the cluster. The replicant contains a String holding the node's IP, which I get from the jboss.bind.address property. Every time my cluster goes through a topology change, my service bean on the master node prints out the replicant list.

My service MBean is as follows:

package com.example;

import java.util.LinkedList;
import java.util.List;

import org.apache.log4j.Logger;

// Package names below are as in JBoss AS 4.x; adjust them to your JBoss version
import org.jboss.ha.framework.interfaces.HAPartition;
import org.jboss.ha.singleton.HASingletonSupport;

public class CoordinatorHAService extends HASingletonSupport
		implements CoordinatorHAServiceMBean {

	private static final Logger logger =
		Logger.getLogger(CoordinatorHAService.class);

	/**
	 * Can be used instead of the constructor
	 * defined in jboss-service.xml
	 */
	private static final String BIND_ADDRESS =
		System.getProperty("jboss.bind.address");

	/**
	 * Custom name for this HA partition, overrides
	 * the default HA partition name which is
	 * canonical name of this MBean
	 */
	private final static
	String COORDINATOR_HA_SERVICE_NAME =
		"ServiceName:CoordinatorHAService";

	/**
	 * Current node IP
	 */
	private String nodeIp = null;

	/**
	 * Constructor that gets value of this node IP
	 * from 'jboss.bind.address' property
	 * defined in jboss-service.xml
	 *
	 */
	public CoordinatorHAService(String nodeIp) {
		this.nodeIp = nodeIp;

	}

	public void startService() throws Exception {

		try {
			/**
			 * The call to super must be done before getting the
			 * HAPartition. If super is not called, the HAPartition
			 * will be 'null'.
			 *
			 * Alternatively you can use InitialContext to get the
			 * default partition, then you don't have to call
			 * super.startService(). I haven't tested it, so I am
			 * not sure of the results.
			 *
			 * InitialContext ic = new InitialContext();
			 * String partitionName = ServerConfigUtil.getDefaultPartitionName();
			 * String partitionJndi = "/HAPartition/" + partitionName;
			 * HAPartition partition = (HAPartition) ic.lookup(partitionJndi);
			 */
			super.startService();

			/**
			 * HAPartition gives access to the replicant manager,
			 * which I am using to add node replicants to.
			 */
			HAPartition partition = super.getPartition();

			if (partition != null) {
				partition.getDistributedReplicantManager().add(
						this.getServiceHAName(), this.getNodeIp());
			}

		} catch (Exception e) {
			// Log the failure instead of swallowing it silently
			logger.error("Failed to start CoordinatorHAService", e);
			this.stopService();
		}
	}

	public synchronized void stopService() throws Exception {
		super.stopService();
	}

	public boolean isMasterNode() {
		return super.isMasterNode();
	}

	/**
	 * Called when node is elected as a master node
	 */
	public void startSingleton() {

	}

	/**
	 * Called when node stops acting as a master node
	 */
	public void stopSingleton() {

	}

	/**
	 * Override this method only if you need to provide
	 * a custom partition wide unique service name.
	 * The default implementation will usually work,
	 * provided that the getServiceName() method returns
	 * a unique canonical MBean name.
	 *
	 */
	public String getServiceHAName() {
		// return super.getServiceHAName();
		return CoordinatorHAService.COORDINATOR_HA_SERVICE_NAME;
	}

	/**
	 * Called when there are topology changes in the cluster.
	 */
	@SuppressWarnings("unchecked")
	public void partitionTopologyChanged(List newReplicants,
											int newViewID) {
		super.partitionTopologyChanged(newReplicants, newViewID);

		/**
		 * If the current node is the master node,
		 * print the replicants in the cluster.
		 */
		if (this.isMasterNode()) {

			List<String> clusterNodeIps =
				new LinkedList<String>(newReplicants);

			for (String clusterNodeIp : clusterNodeIps) {
				logger.info("Replicant IP: " + clusterNodeIp);
			}
		}
	}

	/**
	 * Gets current node's IP
	 */
	public String getNodeIp() {
		return nodeIp;
	}
}

...and this is my jboss-service.xml:

<?xml version="1.0" encoding="UTF-8"?>

<server>
	<mbean code="com.example.CoordinatorHAService"
		name="com.example:service=CoordinatorHAService">
		<constructor>
			<arg type="java.lang.String"
				value="${jboss.bind.address:127.0.0.1}" />
		</constructor>
	</mbean>
</server>

In the JBoss clustering architecture, the DRM sits on top of HAPartition, which abstracts the underlying communication framework.

Maybe this is not the best way to do it, but I do find the DRM useful when I want to manage information about my cluster nodes and I don't want to rely on HAPartition to provide me with that information.

The moment a new node joins the cluster, I insert its IP into my set of replicants, so I always have accurate and up-to-date information about the nodes in my cluster. If a node dies, my list of replicants gets updated and the IP of the dead node is removed from the list. The source code is attached to this post, or you can just copy/paste it if you feel like it.
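
If you prefer to be notified of these changes directly, rather than overriding partitionTopologyChanged(), the DRM can also call you back when the replicant list for a key changes. Below is a rough sketch using the ReplicantListener callback as I remember it from JBoss AS 4.x; the exact callback signature differs between versions, so treat this as an illustration only:

	DistributedReplicantManager drm =
		partition.getDistributedReplicantManager();

	// Called whenever the set of replicants registered under the key changes
	drm.registerListener(COORDINATOR_HA_SERVICE_NAME,
		new DistributedReplicantManager.ReplicantListener() {
			public void replicantsChanged(String key, List newReplicants,
					int newReplicantsViewId) {
				logger.info("Replicants for " + key + " changed, view "
						+ newReplicantsViewId + ": " + newReplicants);
			}
		});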

JBoss clustering is a very big topic, so if someone is interested, the JBoss group provides a PDF book called JBoss AS Clustering on their website. It's really useful and easy to understand, and it walks through the fundamental concepts of clustering. The book is a bit long but worthwhile to have a look at ;)

Any comments and/or flames about this post are welcome …

cheers


JBoss Clustering – Shared State Across Cluster Partition

Did you know that if you have a JBoss cluster, the HA singleton service beans on each node can share a common in-memory state? The state is a memory space shared by all HA singleton service beans in a cluster. It is possible to save an object to the state using the HA singleton service bean on one node, and to retrieve that object on another node.

The implementation is very easy. Let's assume you have two nodes in a cluster, and on each node you have an HA singleton service bean running. Let's call them “service A” and “service B”. Your service beans should extend the HASingletonSupport class. HASingletonSupport in its turn extends HAServiceMBeanSupport. The latter gives you access to two methods:

public void setDistributedState(String key, Serializable value) throws Exception

and

public Serializable getDistributedState(String key)

These are convenience methods which allow you to save and retrieve objects from the shared state. The value can be whatever you want – wrapped primitive types or even your own POJOs – as long as it is Serializable.
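
For example, your own POJO only needs to implement Serializable to be usable as a value in the distributed state (the class below is just an illustration, not part of any JBoss API):

public class NodeInfo implements java.io.Serializable {

	private static final long serialVersionUID = 1L;

	private final String nodeIp;
	private final long startedAt;

	public NodeInfo(String nodeIp, long startedAt) {
		this.nodeIp = nodeIp;
		this.startedAt = startedAt;
	}

	public String getNodeIp() { return nodeIp; }

	public long getStartedAt() { return startedAt; }
}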

Let's say that somewhere inside your service A you have the following method:

public void setObjectToSharedState() {
	String key = "test-1";
	String value = "Testing memory state";

	try {
		this.setDistributedState(key, value);
	} catch (Exception e) {
		e.printStackTrace();
	}
}

What has happened now is that, by using your service A, you saved a String value into the shared memory under the key name “test-1”. You can use this key name in your service B to retrieve the corresponding value from the shared memory, in this case a String. Let's say the following method sits somewhere in your service B bean, on another node in the cluster:

	public void getObjectFromSharedState()  {
		String key = "test-1";
		String value = (String) this.getDistributedState(key);
		System.out.println("The value is: " + value);
	}

By specifying a key name, you can retrieve from the shared memory whatever is saved under that key. You just have to know the type to cast it back to.
Also keep in mind: if you specify a wrong key name, the retrieved object will be null.
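
So a slightly more defensive version of the retrieval method could look like this (just a sketch):

	public void getObjectFromSharedState() {
		String key = "test-1";
		Serializable stored = this.getDistributedState(key);

		if (stored == null) {
			System.out.println("Nothing is stored under key: " + key);
		} else if (stored instanceof String) {
			System.out.println("The value is: " + (String) stored);
		}
	}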

The shared memory state is another option if you want to share something between your nodes and for some reason you don't want to use JBoss Cache.

JBoss Clustering – How Many Nodes in the Cluster?

If you want to know how many nodes there are in the current cluster partition, all you have to do is ask HAPartition for the node list. HAPartition represents your cluster partition, and it contains all the information you need to know about your cluster and its nodes: their host names, IPs, and positions in the cluster view.

Let's assume you have a service bean that extends HASingletonSupport. HASingletonSupport in its turn extends HAServiceMBeanSupport.

HAServiceMBeanSupport is the class that gives you access to the HAPartition object.

The code below requests the HAPartition object and the node list; you can put it somewhere in your service bean:

	HAPartition partition = getPartition();
	ClusterNode[] nodes = partition.getClusterNodes();
	System.out.println(nodes.length);

A ClusterNode object represents a node in the cluster. It contains information about the node's host name, its internet address, and a few more things. getClusterNodes() returns an array containing as many ClusterNode objects as you currently have in your cluster, so by reading the array's length you will know how many nodes your cluster has.
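
For example, you can loop over that array and print some of the node details. The accessor names below (getName(), getIpAddress()) are the ones I remember from the JBoss AS 4.x ClusterNode interface, so verify them for your version:

	HAPartition partition = getPartition();
	ClusterNode[] nodes = partition.getClusterNodes();

	for (ClusterNode node : nodes) {
		// getName() typically looks like "hostname:port",
		// getIpAddress() returns the node's InetAddress
		System.out.println(node.getName() + " / " + node.getIpAddress());
	}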

Another way is to do practically the same thing, but request the current view of your cluster from the HAPartition:

	HAPartition partition = getPartition();
	Vector v = partition.getCurrentView();

	System.out.println(v.size());

	for (Object o : v) {
		System.out.println(o.toString());
	}

The view, which is a Vector, contains information about the node sockets. When printed, each entry gives you a String representation of the node's IP and port: xxx.xxx.xxx.xxx:port. Also, by printing the size of the Vector, you get the number of nodes in the cluster.

Important note:
I noticed there is some delay between the time a node leaves the cluster and the time HAPartition returns an updated view. In other words, after a node has left the cluster and a topology change has occurred, the HAPartition may still return an old view containing the dead node. So be careful.

Also, getPartition() may return null if super.startService() hasn't been called. Have a look at the implementation of HAServiceMBeanSupport and my other post, JBoss Clustering – HASingleton service.

That's it :)

Deployment of an MBean Separately from Its Interface

A few days ago I came across a nasty little thing during MBean deployment. What I did was separate my MBean class and its interface into two archives. So first I would deploy an archive containing my interfaces, and then I would deploy an archive containing my bean classes.

Why did I do it this way?

Well, to minimize the chances of getting a ClassCastException, since JBoss creates a proxy to the bean from its interface. Having the interfaces deployed separately from the bean itself allows me to easily modify the business logic in the bean (if needed): I only have to redeploy the bean itself, without redeploying the interface, so the existing proxies to the bean in the system are not affected.

To my big surprise I got an exception:

	org.jboss.deployment.DeploymentException: Class does not expose a management interface:
	java.lang.Object; - nested throwable:
	(javax.management.NotCompliantMBeanException:Class does not expose a management interface: java.lang.Object)

I could not understand where I had gone wrong. I had my MBean class:

public class MyService extends ServiceMBeanSupport implements MyServiceMBean {

	public void startService() throws Exception {
		...
	}

	public void stopService() {
		...
	}
}

I had my interface named with the ‘MBean’ suffix. The interface name must have this suffix; otherwise, you will receive a Class does not expose a management interface exception:

	public interface MyServiceMBean extends ServiceMBean {
		...
	}

I had my jboss-service.xml:

	<?xml version="1.0" encoding="UTF-8"?>
		<server>
			<mbean code="com.example.MyService" name="com.example:service=MyService">
			</mbean>
		</server>

Finally, I discovered that it was because of the way I did the packaging. If you are ever going to package an MBean and its interface in two separate archives, they (the MBean and its interface) must sit under the same package name!

For example: if, in archive A, I put my MBean class under the package name “com.example.test”, then in archive B I have to put its interface under “com.example.test” as well.
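
In other words, the layout of the two archives could look something like this (the archive and package names are only illustrative):

	my-service-interfaces.jar
		com/example/test/MyServiceMBean.class

	my-service.sar
		com/example/test/MyService.class
		META-INF/jboss-service.xml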