JBoss Clustering Architecture – Distributed Replicant Manager

My understanding of the Distributed Replicant Manager (DRM) is that it lets you attach some serialized data (a stub) to a cluster node and manage it.

Examples of such data include the list of stubs for a given RMI server. Each node has a stub to share with the other nodes. The DRM enables sharing these stubs across the cluster and lets you know which node each stub belongs to.

If one of the nodes leaves the cluster, its stub is automatically removed from the list of replicants (stubs) that the DRM maintains.

Also, for each set of replicants, the DRM holds an ID that is identical on all nodes in the cluster.

I used the DRM to attach a replicant to a node in the cluster. The replicant contains a String holding the node IP, which I get from the jboss.bind.address property. Every time my cluster goes through a topology change, the service bean on the master node prints out the replicant list.

My service MBean is as follows:

package com.example;

import java.util.LinkedList;
import java.util.List;

import org.apache.log4j.Logger;
import org.jboss.ha.framework.interfaces.HAPartition;
import org.jboss.ha.jmx.HASingletonSupport;

public class CoordinatorHAService extends HASingletonSupport
		implements CoordinatorHAServiceMBean {

	private static Logger logger =
		Logger.getLogger(CoordinatorHAService.class);

	/**
	 * Can be used instead of the constructor
	 * defined in jboss-service.xml
	 */
	private static final String BIND_ADDRESS =
		System.getProperty("jboss.bind.address");

	/**
	 * Custom name for this HA partition, overrides
	 * the default HA partition name which is
	 * canonical name of this MBean
	 */
	private final static
	String COORDINATOR_HA_SERVICE_NAME =
		"ServiceName:CoordinatorHAService";

	/**
	 * Current node IP
	 */
	private String nodeIp = null;

	/**
	 * Constructor that gets value of this node IP
	 * from 'jboss.bind.address' property
	 * defined in jboss-service.xml
	 *
	 */
	public CoordinatorHAService(String nodeIp) {
		this.nodeIp = nodeIp;

	}

	public void startService() throws Exception {

	try {
	/**
	 * Call for super must be done before getting
	 * HAPartition. If super is not called, HAPartition
	 * will be 'null'
	 *
	 * Alternatively you can use InitialContext to get
	 * the default partition; then you don't have to
	 * call super.startService(). I haven't tested it,
	 * so I am not sure of the results.
	 *
	 *
	 * InitialContext ic = new InitialContext();
	 * String partitionName = ServerConfigUtil.getDefaultPartitionName();
	 * String partitionJndi = "/HAPartition/" + partitionName;
	 * HAPartition partition = (HAPartition) ic.lookup(partitionJndi);
	 */
		super.startService();

	/**
	 * HAPartition gives access to the replicant manager,
	 * which I am using to add node replicants to.
	 */
		HAPartition partition = super.getPartition();

		if (partition != null) {
			partition.getDistributedReplicantManager().add(
					this.getServiceHAName(), this.getNodeIp());
		}

		} catch (Exception e) {
			logger.error("Failed to start service, stopping", e);
			this.stopService();
		}

	}

	synchronized public void stopService() throws Exception {
		super.stopService();
	}

	public boolean isMasterNode() {
		return super.isMasterNode();
	}

	/**
	 * Called when node is elected as a master node
	 */
	public void startSingleton() {

	}

	/**
	 * Called when node stops acting as a master node
	 */
	public void stopSingleton() {

	}

	/**
	 * Override this method only if you need to provide
	 * a custom partition wide unique service name.
	 * The default implementation will usually work,
	 * provided that the getServiceName() method returns
	 * a unique canonical MBean name.
	 *
	 */
	public String getServiceHAName() {
		// return super.getServiceHAName();
		return CoordinatorHAService.COORDINATOR_HA_SERVICE_NAME;
	}

	/**
	 * Called when there are topology changes in the cluster.
	 */
	public void partitionTopologyChanged(List newReplicants,
												int newViewID) {
		super.partitionTopologyChanged(newReplicants,
												newViewID);

		/**
		 * If current service is the master node -
		 * print replicants in the cluster
		 */
		if (this.isMasterNode()) {

			List<String> clusterNodeIps =
				new LinkedList<String>(newReplicants);

			for (String clusterNodeIp : clusterNodeIps) {
				logger.info("Replicant IP: " + clusterNodeIp);
			}
		}

	}

	/**
	 * Gets current node's IP
	 */
	public String getNodeIp() {
		return nodeIp;
	}
}

..and this is my jboss-service.xml below:

<?xml version="1.0" encoding="UTF-8"?>

<server>
    <mbean
	code="com.example.CoordinatorHAService"
	name="com.example:service=CoordinatorHAService">
	<constructor>
	<arg type="java.lang.String"
		value="${jboss.bind.address:127.0.0.1}" />
	  </constructor>
     </mbean>
</server>

In the JBoss clustering architecture, the DRM sits on top of HAPartition, which abstracts the communication framework.

Maybe this is not the most correct way to do it, but I do find the DRM useful when I want to manage information about my cluster nodes and don't want to rely on HAPartition to provide me with such information.

The moment a new node joins the cluster, I insert its IP into my set of replicants, so I always have accurate and up-to-date information about the nodes in my cluster. If a node dies, my list of replicants gets updated, with the IP of the dead node removed from the list. The source code is attached to this post, or you can just copy/paste if you feel like it.

JBoss clustering is a very big topic, so if you are interested, the JBoss group provides a PDF book called JBoss AS Clustering on their website. It is really useful and easy to understand, and it walks through the fundamental concepts of clustering. The book is a bit long but worthwhile to have a look at ;)

Any comments and/or flames about this post are welcome…

cheers

jboss clustering replicant manager sourcecode

Java Generics and Reflection

Hi, the other day I had a situation where, in my code at run time, I had to determine the supertype of a Class object I had obtained. To subtype my classes, I used generics.

If the obtained Class was of the expected supertype, I had to invoke a superclass static method on the subclass's Class object using reflection.

I used these concepts to prepare a short tutorial, and I attached the source files to this post in case someone wants to download them. What I did was create an abstract class and an extending child class using generics. In my test client, I determine the supertype of my child class at run time and then invoke a static method using reflection.

Reflection also allows you to invoke methods and constructors, and to change the values of fields, that are private. This is often called a “reflection attack”. In my post Hack any Java class using reflection attack, I give several examples of reflection attacks.

Ok, back to the topic now:

My abstract parent class:

import java.io.Serializable;

public abstract class AbstractParentEntity<E extends AbstractParentEntity<E>>
		implements Serializable {

	private static final long serialVersionUID = 5419258598746186610L;

	public static Long getSomeLongValue() {
		return new Long(999);
	}
}

My extending child class:

import java.io.Serializable;

public class ChildEntity extends AbstractParentEntity<ChildEntity>
		implements Serializable {

	private static final long serialVersionUID = -2271176823058287608L;

	private String name;

	public ChildEntity() {

	}
}

My test client:

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Type;

public class Test {

	//canonical name of my child class
	private static final String
		CHILD_CLASS_NAME = "ChildEntity";
	//name of the static method i am going to invoke
	private static final String
		PARENT_METHOD = "getSomeLongValue";

   public static void main(String[] args) {

	Thread thread = Thread.currentThread();

	ClassLoader classLoader =
			thread.getContextClassLoader();

	try {

	//use class loader to create a Class for
	//a given class name
	Class subclass = Class.forName(CHILD_CLASS_NAME,
				true, classLoader);
	System.out.println("Subclass canonical name: ["
		+ subclass.getCanonicalName() + "]");

	//get superclass Class from the subclass
	Class superclass = subclass.getSuperclass();
	System.out.println("Super class canonical name: ["
		+ superclass.getCanonicalName() + "]");

	//get the type of subclass
	Type subtype = subclass.getGenericSuperclass();
	System.out.println("Subclass type: ["
		+ subtype.toString() + "]");

	//check whether my subclass type starts with
	//class name of my super type, if true - my subclass
	//is indeed of type of my superclass
	if (subtype.toString().startsWith(
			superclass.getCanonicalName())) {

	System.out.println("Class: [" +
		subclass.getSimpleName()
		+ "] is type of ["
		+ superclass.getSimpleName() + "]");
	}

	// 'null' assumes empty array
	Method method = subclass.getMethod(PARENT_METHOD,
				(Class[]) null);

	/*
	* public Object invoke(Object obj, Object... args)
	*
	* If the underlying method is static, then the specified 'obj'
	* argument is ignored. It may be null.
	*
	* Parameters: obj - the object the underlying method is invoked
	* from; args - the arguments used for the method call
	*/

	System.out.println("Invoking static parent method: [" +
		PARENT_METHOD + "] on extending subclass...");
	Object objResult = method.invoke(null, (Object[]) null);

	System.out.println("Some value: [" + objResult.toString() + "]");

	}
	catch (SecurityException e) {
		e.printStackTrace();
	}
	catch (NoSuchMethodException e) {
		e.printStackTrace();
	}
	catch (IllegalArgumentException e) {
		e.printStackTrace();
	}
	catch (IllegalAccessException e) {
		e.printStackTrace();
	}
	catch (InvocationTargetException e) {
		e.printStackTrace();
	}
	catch (ClassNotFoundException e) {
		e.printStackTrace();
	}

	}
}

Basically, what happens is: I compare whether the generic supertype of my subclass, AbstractParentEntity<ChildEntity>, starts with the canonical name of my superclass, AbstractParentEntity.

When using generics, this check will always return true if my subclass extends my superclass.
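A more robust alternative to the string comparison, though I did not use it in the tutorial, is to inspect the ParameterizedType directly and read the actual type argument. A minimal self-contained sketch (Parent and Child are hypothetical stand-ins for AbstractParentEntity and ChildEntity):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class TypeArgCheck {

	// Hypothetical stand-ins for AbstractParentEntity and ChildEntity
	static abstract class Parent<E extends Parent<E>> {}
	static class Child extends Parent<Child> {}

	// Returns the first actual type argument if the direct
	// superclass is generic, otherwise null
	static Class<?> typeArgumentOf(Class<?> subclass) {
		Type generic = subclass.getGenericSuperclass();
		if (generic instanceof ParameterizedType) {
			ParameterizedType pt = (ParameterizedType) generic;
			// for Child extends Parent<Child>, the argument is Child itself
			return (Class<?>) pt.getActualTypeArguments()[0];
		}
		return null;
	}

	public static void main(String[] args) {
		System.out.println(typeArgumentOf(Child.class)); // prints the Child class
	}
}
```

This avoids relying on toString() formats, which are not guaranteed to stay stable across JVM versions.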

Once I had successfully determined the subclass's type, I invoked the static method using reflection. Below you can see the output of my program:

Subclass canonical name: [ChildEntity]
Super class canonical name: [AbstractParentEntity]
Subclass type: [AbstractParentEntity<ChildEntity>]
Class: [ChildEntity] is type of [AbstractParentEntity]
Invoking static parent method: [getSomeLongValue] on extending subclass...
Some value: [999]

If you have never used generics in Java (those who moved to Java from C++ will recognize them as similar to templates), Sun offers a nice introductory generics tutorial that explains the basics.

Basically, generics allow you to abstract over types. When you use generics, your code becomes safer and clearer. So I think it's worthwhile having a look at them :)
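A tiny self-contained illustration of that safety (a hypothetical example, standard library only): with a raw list a wrong element type slips past the compiler and fails only at run time, while a generic list turns the same mistake into a compile-time error.

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsSafety {

	// With a raw list the compiler cannot stop a wrong element type;
	// the mistake only surfaces as a ClassCastException at run time.
	static boolean rawListFailsAtRuntime() {
		List raw = new ArrayList();
		raw.add("text");
		raw.add(Integer.valueOf(42)); // compiles fine...
		try {
			String s = (String) raw.get(1); // ...but blows up here
			return s == null; // never reached
		} catch (ClassCastException expected) {
			return true;
		}
	}

	public static void main(String[] args) {
		System.out.println(rawListFailsAtRuntime()); // true

		// Generic list: the same mistake is a compile-time error
		List<String> safe = new ArrayList<String>();
		safe.add("text");
		// safe.add(Integer.valueOf(42)); // would not compile
		String first = safe.get(0); // no cast needed
		System.out.println(first);
	}
}
```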

Comments / corrections / flames?

JBoss Clustering – Shared State Across Cluster Partition

Did you know that if you have a JBoss cluster, the HA singleton service beans on each node can share a common memory state? The state is a memory space shared by all HA singleton service beans in a cluster. It is possible to save an object to the state using the HA singleton service bean on one node, and to retrieve that object on another node.

The implementation is very easy. Let's assume you have two nodes in a cluster, and on each node an HA singleton service bean is running. Let's call them “service A” and “service B”. Your service beans should extend the HASingletonSupport class. HASingletonSupport in its turn extends HAServiceMBeanSupport. The latter gives you access to two methods:

public void setDistributedState(String key, Serializable value) throws Exception

and

public Serializable getDistributedState(String key)

These are convenience methods that allow you to save objects to, and retrieve them from, the shared state. The value can be whatever you want, as long as it is Serializable – wrapper types or even POJOs.

Let's say that somewhere inside your service A you have the following method:

public void setObjectToSharedState() {
	String key = "test-1";
	String value = "Testing memory state";

	try {
		this.setDistributedState(key, value);
	} catch (Exception e) {
		e.printStackTrace();
	}
}

Now, what has happened is that, using your service A, you saved a String value into the shared memory under the key name “test-1”. You can use this key name in your service B to retrieve the corresponding value from the shared memory, in this case a String. Let's say the following method sits somewhere in your service B bean, on another node in the cluster:

	public void getObjectFromSharedState()  {
		String key = "test-1";
		String value = (String) this.getDistributedState(key);
		System.out.println("The value is: " + value);
	}

By specifying a key name, you can retrieve from the shared memory whatever is saved under that key. You just have to know the type to cast it back to.
Also keep in mind: if you specify the wrong key name, the retrieved object will be null.

The shared memory state is another option if you want to share something between your nodes and for some reason don't want to use JBoss Cache.

JBoss Clustering – How Many Nodes in the Cluster?

If you want to know how many nodes there are in the current cluster partition, all you have to do is ask HAPartition for the node list. HAPartition represents your cluster partition, and it contains all the information you need about your cluster and its nodes: their host names, IPs, and positions in the cluster view.

Let's assume you have a service bean that extends HASingletonSupport. HASingletonSupport in its turn extends HAServiceMBeanSupport.

HAServiceMBeanSupport is the one that gives you access to the HAPartition object.

The code below, which requests the HAPartition object and the node list, can be put somewhere in your service bean:

	HAPartition partition = getPartition();
	ClusterNode[] nodes = partition.getClusterNodes();
	System.out.println(nodes.length);

A ClusterNode object represents a node in the cluster. It contains information about the node's host name, its internet address and a few more things. getClusterNodes() returns an array containing as many ClusterNode objects as your cluster currently has, so by getting the array's length you know how many nodes your cluster has.

Another way is to do practically the same thing, but request the current view of your cluster from HAPartition:

	HAPartition partition = getPartition();
	Vector v = partition.getCurrentView();

	System.out.println(v.size());

	for (Object o : v) {
		System.out.println(o.toString());
	}

The view, which is a Vector, contains information about the node sockets. When printed, each entry gives you a String representation of the node's IP + port: xxx.xxx.xxx.xxx:port. By printing the size of the Vector, you also get the number of nodes in the cluster.
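If you need the host or port separately, those “ip:port” strings are easy to split. A small self-contained sketch (ViewEntry is a hypothetical helper class, not part of JBoss):

```java
public class ViewEntry {

	// Split the "xxx.xxx.xxx.xxx:port" form that printing
	// a view entry produces
	static String hostOf(String entry) {
		return entry.substring(0, entry.lastIndexOf(':'));
	}

	static int portOf(String entry) {
		return Integer.parseInt(entry.substring(entry.lastIndexOf(':') + 1));
	}

	public static void main(String[] args) {
		String entry = "192.168.62.12:1099";
		System.out.println(hostOf(entry)); // 192.168.62.12
		System.out.println(portOf(entry)); // 1099
	}
}
```

Using lastIndexOf(':') keeps the split working even if the host part ever contains colons.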

Important note:
I noticed there is some delay between the time a node leaves the cluster and the time HAPartition returns an updated view. In other words, after a node has left the cluster and a topology change has occurred, HAPartition may still return an old view that contains the dead node. So be careful.

Also, getPartition() may return null if super.startService() hasn't been called. Have a look at the implementation of HAServiceMBeanSupport and my other post JBoss Clustering – HASingleton service.

That's it :)

Drools – Working with Stateless Session

Drools (now also called JBoss Rules) is an amazing open source framework that allows you to create a business rules management system for your application. I got introduced to Drools while working on a project at my current company.

It is very easy to use and implement, and it is very efficient. For example, instead of having dozens of if-else statements for some of your application's business rules, you can use Drools to create a rule engine with your defined rules and pass your objects through it.

For example, in an application that deals with student objects, you can create a rule that checks whether a student has paid his/her fees for the next semester and, if not, sends him/her a reminder email… etc.
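To make the contrast concrete, here is a hypothetical sketch of the if-else style that a rule engine replaces (Student and needsReminder are invented for illustration; the real logic would live in a DRL rule instead):

```java
public class FeeReminder {

	// Hypothetical student record
	static class Student {
		String email;
		boolean paidNextSemester;

		Student(String email, boolean paidNextSemester) {
			this.email = email;
			this.paidNextSemester = paidNextSemester;
		}
	}

	// The hard-coded business rule a rule engine would externalize
	static boolean needsReminder(Student s) {
		if (!s.paidNextSemester) {
			// in a real system: send a reminder email to s.email
			return true;
		}
		return false;
	}

	public static void main(String[] args) {
		System.out.println(needsReminder(new Student("a@b.c", false))); // true
		System.out.println(needsReminder(new Student("d@e.f", true)));  // false
	}
}
```

With Drools, conditions like this move out of compiled code and into rule files that can change without touching the application.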

In this example I want to show how to work with a stateless Drools session and retrieve results from a global variable. I know that at this point it's a bit unclear, so I will try to explain as I go… or you can simply visit their website; the link is under the “Useful Links” section on the right-hand side…

In addition, you can always join their IRC channel #drools. The Drools team is very helpful, and I owe my special thanks to the fellas named mic_hat and conan there, who had a lot of patience with me ;)

For my example I prepared a simple POJO, DRL and DSL files, and a test client.
The DRL file contains my rules. The DSL is the expandable template for the DRL.

Drools allows you to write your rules in plain human language in the DRL, and then in the DSL template you specify what programming code each human-language sentence corresponds to. The following explains what I mean:

My DRL file with 2 rules in it:

package com.test.drools.rules;

expander mydsl.dsl;

import com.test.drools.entities.Pojo;

global java.util.List list;

rule "1"
salience 1000
auto-focus true

when
      The blog name is "Java Beans dot Asia"
then
      Log "The blog name was matched"
end

rule "2"
salience 900

when
      This post was created in "May"
then
      Log "The blog post month was matched"
end

“expander mydsl.dsl” – the file name of my DSL template.
“global java.util.List list” – a global variable of type List. You can use a global variable to store results, log messages and even objects.
“salience” – the priority that determines which rules get executed first.
“auto-focus” – the rule that has auto-focus gets executed first; it is basically the starting point of execution.

My DSL template file for my DRL:

[condition][]The blog name is "{name}"= poj : Pojo( blogName == "{name}")
[condition][]This post was created in "{month}"= poj : Pojo( postMonth == "{month}")
[consequence][]Log "{message}"= list.add(new String("{message}"));

As you can see, “This post was created in "arg"” expands into “poj : Pojo( postMonth == "{month}" )”, where the value of “arg” is compared to the value of the postMonth variable in my POJO.

Keep in mind that you do not have to use a DSL template; you can use only a DRL file if you want and have your source code there. Using the template just makes your rules very readable.
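For example, based on the DSL mappings above, rule “1” written in pure DRL (no template, so no expander line) would look roughly like this:

rule "1"
salience 1000
auto-focus true

when
      poj : Pojo( blogName == "Java Beans dot Asia" )
then
      list.add(new String("The blog name was matched"));
end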

My POJO:

package com.test.drools.entities;

import java.io.Serializable;

public class Pojo implements Serializable {

	private String blogName;
	private String postMonth;

	public Pojo() {

	}

	public String getBlogName() {
		return blogName;
	}

	public void setBlogName(String blogName) {
		this.blogName = blogName;
	}

	public String getPostMonth() {
		return postMonth;
	}

	public void setPostMonth(String postMonth) {
		this.postMonth = postMonth;
	}
}

My client:

package com.test.drools.client;

import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.drools.RuleBase;
import org.drools.RuleBaseFactory;
import org.drools.StatelessSession;
import org.drools.StatelessSessionResult;
import org.drools.base.CopyIdentifiersGlobalExporter;
import org.drools.compiler.DroolsError;
import org.drools.compiler.DroolsParserException;
import org.drools.compiler.PackageBuilder;
import org.drools.compiler.PackageBuilderErrors;

import com.test.drools.entities.Pojo;

public class Client {

//path to the DRL file inside my JAR
final static String DRL_URL =
		"/com/test/drools/rules/mydrl.drl";
//path to the DSL file inside my JAR
final static String DSL_URL =
		"/com/test/drools/rules/mydsl.dsl";

public static void main(String[] args) {

   //Instantiate and initialize the POJO.
   Pojo p1 = new Pojo();
   p1.setBlogName("Java Beans dot Asia");
   p1.setPostMonth("May");

   //Calling for private method to compile a RuleBase
   RuleBase ruleBase = getRuleBase();

   //Instantiating StatelessSession
   StatelessSession session =
			ruleBase.newStatelessSession();

   //Create the global variable and set it on the session.
   //The name 'list' is the same name mentioned in the DRL:
   //global java.util.List list;
   List list = new ArrayList();
   session.setGlobal("list", list);

   //specifying the global name that should be exported
   session.setGlobalExporter(
   new CopyIdentifiersGlobalExporter(
				new String[]{"list"} ) );

   //executeWithResults() - stores execution results in a
   //StatelessSessionResult object. That object will
   //contain our global variable with the results, which
   //we can use after the execution of the stateless
   //session has finished.
   StatelessSessionResult result =
			session.executeWithResults(p1);

   //get global variable and cast back to
   //the type of List
   List retrievedList = (List) result.getGlobal("list");

   if (retrievedList != null &&
			retrievedList.size() > 0) {

   for (Iterator i = retrievedList.iterator();
					i.hasNext();) {
	System.out.println((String) i.next());
   }
}

}

private static RuleBase getRuleBase() {

//Create a new package builder
PackageBuilder builder = new PackageBuilder();

try {

//call for private method to get the DRL
Reader drl = getSourceDrl();

//call for private method to get the DSL
Reader dsl = getDsl();

//Add rule package to the builder using drl and
//dsl Reader objects
builder.addPackageFromDrl(drl, dsl);

//Check whether our DRL and DSL files had any
//errors when trying to create a rule package.
//If DRL and/or DSL had any errors we wont be able
//to create a rule package and a new RuleBase.
PackageBuilderErrors errors = builder.getErrors();

DroolsError[] error = errors.getErrors();

if (error.length > 0) {
for (DroolsError err : error) {
System.out.println("Errors are: " + err.getMessage());
}
}

} catch (DroolsParserException e1) {
e1.printStackTrace();
} catch (IOException e1) {
e1.printStackTrace();
}

//Get new RuleBase object. RuleBase is where
//we will get Stateless session object from.
RuleBase ruleBase = RuleBaseFactory.newRuleBase();

try {
//Add package with our rules to the RuleBase.
//This is when the RuleBase is actually compiled
ruleBase.addPackage(builder.getPackage());
} catch (Exception e1) {
e1.printStackTrace();
}

return ruleBase;
}

private static Reader getDsl()
			throws IOException {
return new InputStreamReader(Client.class
.getResourceAsStream(DSL_URL));
}

private static Reader getSourceDrl()
			throws IOException {
return new InputStreamReader(Client.class
.getResourceAsStream(DRL_URL));
}

}

I will now try to give a brief explanation of what is actually happening:
when session.executeWithResults(p1); executes, the rule engine applies the rules to the p1 POJO object. If a rule matches, the result is stored in the global variable.

For example:
If the value of the “blogName” variable inside my p1 POJO object equals “Java Beans dot Asia”, then rule “1” in my DRL is matched, and the result “The blog name was matched” is stored in my global List.

The final output of the program will be as follows:

The blog name was matched
The blog post month was matched

This example was very simple; I had only two rules that compared String literals. But Drools definitely has the capability to support a business rule system with thousands of rules if needed, while staying friendly for both developers and business clients. I think it's worthwhile checking it out :)

I’ve included the source code and a JUnit test case for this tutorial if you want to have a look at it and try it yourself.

drools – working with stateless session sourcecode

Stateless Beans and Annotations

Since EJB 3.0, it has been possible to use JDK 5.0 metadata annotations to create EJB 3.0 Java beans. This makes development very easy. The only drawback, as I see it, is that when you want to change/add/remove an annotation you actually have to recompile the class.

The example below shows how to create a stateless enterprise Java bean using annotations. The bean implements a remote interface.

The interface:

	package com.test.stateless.interfaces;

	import javax.ejb.Remote;

	@Remote
	public interface StatelessTestRemote {
		public void doSomething();
	}

The “Remote” annotation specifies that this class is the remote interface of the bean.

The bean:


package com.test.stateless.beans;

import javax.ejb.Stateless;
import org.jboss.annotation.ejb.RemoteBinding;
import com.test.stateless.interfaces.StatelessTestRemote;

@Stateless
@RemoteBinding(jndiBinding = "com/test/stateless/beans/StatelessTestBean/remote")
public class StatelessTestBean implements StatelessTestRemote {

	public void doSomething() {

	}
}

The “Stateless” annotation specifies that the class is a stateless bean.
The “RemoteBinding” annotation specifies the JNDI name for the interface.

Keep in mind that you can use @Remote on the implementation bean itself; it doesn't have to be inside the interface class. For example:

@Stateless
@Remote({ StatelessTestRemote.class })
@RemoteBinding(jndiBinding = "com/test/stateless/beans/StatelessTestBean/remote")
public class StatelessTestBean implements StatelessTestRemote {

That's it :)

Update:
As Laird Nelson pointed out in his response to this post, it is possible to override annotations in the XML descriptor. I looked into it, and indeed EJB 3.0 allows you to override the behavior of annotations in the source code, although there are some limitations on which annotations can be overridden. You can refer to the article JBoss EJB 3.0 partial deployment descriptors for a more detailed explanation :)

JBoss Clustering – HASingleton Service

Have you ever dealt with a clustered singleton service? How do you determine which cluster node is the master? Well, if I am on the current node, I can simply ask whether I am the master or not. But what if I already know that the current node is not the master, and I want to determine which node among the other nodes in the cluster is the master?

First I would like to give a brief summary of the HASingleton service (HA stands for High Availability).

Summary:
An HASingleton service is a service that is deployed on every node in a cluster but runs on only one node, while the other nodes remain passive. The node that the service runs on is the master node.

How does JBoss select the master node?
Well, the first node in the cluster becomes the master node. If the existing master node leaves the cluster, as a result of a shutdown for example, another node is selected as master from the remaining nodes.

The master node can control which tasks get executed and how many times. HASingletons also have the ability to share a memory state across the clustered partition – something like caching…

Solution:
Let's assume that I have a service bean that extends the HASingletonSupport class. HASingletonSupport in its turn extends HAServiceMBeanSupport and implements two interfaces: HASingletonMBean and HASingleton. All of them give me those wonderful APIs that can tell me whether the current node is the master or not, what the status of my cluster is, how many nodes it has, etc.

import java.util.List;

import org.apache.log4j.Logger;
import org.jboss.ha.framework.interfaces.HAPartition;
import org.jboss.ha.jmx.HASingletonSupport;

public class MyHAService extends HASingletonSupport implements
        MyHAServiceMBean {

    private static Logger logger =
            Logger.getLogger(MyHAService.class);

 public void startService() throws Exception {
        logger.info(" *** STARTED MY SINGLETON SERVICE *** ");
        super.startService();
 }

 public void stopService() throws Exception {
        logger.info(" *** STOPPED MY SINGLETON SERVICE *** ");
        super.stopService();
 }

public boolean isMasterNode() {
        return super.isMasterNode();
}

public void startSingleton() {
        logger.info(" *** CURRENT NODE IP:"
                + this.getPartition().getClusterNode()
			.getIpAddress().getHostAddress() +
			" ELECTED AS A MASTER NODE *** ");
}

public void stopSingleton() {
        logger.info(" *** CURRENT NODE IP:"
                + this.getPartition().getClusterNode()
			.getIpAddress().getHostAddress()
                + " STOPPED ACTING AS A MASTER NODE *** ");
}

public void partitionTopologyChanged(List newReplicants, int newViewID) {
        logger.info(" *** TOPOLOGY CHANGE STARTING *** ");
        super.partitionTopologyChanged(newReplicants, newViewID);

   }
}

startSingleton() – invoked when the current node is elected as master.
stopSingleton() – invoked when the current node stops acting as master.
partitionTopologyChanged() – invoked when a node joins or leaves the cluster.

As I mentioned before, I can find out whether the current node is the master node by calling isMasterNode(). The method returns true if the node is the master and false if it is not.

If I already know that the current node is not the master, I can ask the clustered partition (the cluster) which node is the master. For example, I can request the current view of my cluster.

The implementation can be similar to the method below, which you can put inside your service bean:

    private String getMasterSocket() {

        HAPartition partition = this.getPartition();

        if (partition != null && partition.getCurrentView() != null) {
            return partition.getCurrentView().get(0).toString();
        }

        return null;
    }

The method above returns a string containing the IP and port of the master node, for example:

192.168.62.12:1099

The HAPartition service maintains, across the cluster, a registry of nodes in view order. Now, keep in mind that the order of the nodes in the view does not necessarily reflect the order in which the nodes joined the cluster.

So the first node in the view, as you can see below, will be the master node.
Simple as that.

return partition.getCurrentView().get(0).toString();

Please note:
The getPartition() method may return null if super.startService() hasn't been called. Have a look at the implementation of HAServiceMBeanSupport and my other post JBoss Clustering – How many nodes in the cluster?.

Bitwise Operation In Hibernate 3

Hi all…
I encountered a small problem doing bitwise operations with Hibernate. The Hibernate 2 HQL parser supported bitwise operations; Hibernate 3, for some reason, does not. So if you want to work around it, you have to create a custom SQLFunction that maps the bitwise operator and add it to the dialect.

You have to create your own class that extends the StandardSQLFunction class and implements the SQLFunction interface. You have to override the render() method from that interface and provide your own implementation.

In my example I show how to use the ampersand symbol ('&') for a bitwise AND operation.

Important note:
The Hibernate team changed the SQLFunction interface when upgrading from version 3.0.2 to 3.0.3, so a custom implementation of the bitwise operation written for 3.0.2 will not work in 3.0.3. Keep that in mind.

Update (13.Jan.2010):
According to one of the blog's readers, this example has also worked successfully under Hibernate v3.2.6!

Implementation:
I am using the Sybase dialect, so I will create a class that extends this dialect with my own implementation:

package com.project.test.dialect;

import org.hibernate.Hibernate;

public class SybaseDialect extends 
	org.hibernate.dialect.SybaseDialect{
	 
	public SybaseDialect() {
	   super();
	   registerFunction("bitwise_and", 
		new BitwiseAndFunction("bitwise_and", 
				Hibernate.INTEGER));
	} 

}

Basically, this means that I am registering a function by the name of bitwise_and whose return type is Integer.

Now, my custom SQLFunction class looks like this:

package com.project.test.dialect;

import java.util.List;

import org.hibernate.QueryException;
import org.hibernate.dialect.function.SQLFunction;
import org.hibernate.dialect.function.StandardSQLFunction;
import org.hibernate.engine.SessionFactoryImplementor;
import org.hibernate.type.Type;

public class BitwiseAndFunction
		extends StandardSQLFunction
			implements SQLFunction {

	public BitwiseAndFunction(String name) {
		super(name);
	}

	public BitwiseAndFunction(String name, Type type) {
		super(name, type);
	}

	public String render(List args,
		SessionFactoryImplementor factory)
				throws QueryException {
		if (args.size() != 2) {
			throw new QueryException(
				"the function must be passed exactly 2 arguments");
		}
		// Render the two arguments joined by the SQL '&' operator
		StringBuffer buffer = new
			StringBuffer(args.get(0).toString());
		buffer.append(" & ").append(args.get(1));
		return buffer.toString();
	}
}

The render() method expects exactly two arguments; otherwise an exception is thrown. Once the arguments are passed in, it returns a String that looks something like this (assuming the passed arguments were one (1) and two (2)):

1 & 2
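To see why this is useful for permission checks, here is a plain-Java sketch of the flag-based logic the query performs on the database side (the flag names and values below are my own assumptions, not from the mapping):

```java
public class PermissionCheck {

	// Hypothetical permission flags, one bit each
	static final int READ = 1;   // binary 001
	static final int WRITE = 2;  // binary 010
	static final int DELETE = 4; // binary 100

	// Mirrors the HQL condition: bitwise_and(permission, required) > 0
	static boolean hasAny(int granted, int required) {
		return (granted & required) > 0;
	}

	public static void main(String[] args) {
		int granted = READ | WRITE; // value 3
		System.out.println(hasAny(granted, WRITE));  // prints true
		System.out.println(hasAny(granted, DELETE)); // prints false
	}
}
```

This is also what the `1 & 2` rendering evaluates to in SQL: since 1 & 2 is 0, a row whose permission is 1 would not match a required permission of 2.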

In my project, I am using named queries in my hbm.xml files, so below you can see a part of my hbm.xml file where I use my bitwise function:

<query name="UserPermissionsForEntity">
select object(ep) from EntityPermission ep where ep.userid =
:userid and ep.entityid = :entityid and ep.classname =
:classname and bitwise_and(ep.permission,:requiredpermission) > 0 
</query>

Note the two arguments that I am passing to the bitwise_and function.

Also, do not forget to update your persistence.xml with your custom dialect class; this is how my persistence.xml looks:

<persistence>
<persistence-unit name="com.project.test.users">
<jta-data-source>java:/DefaultDS</jta-data-source>
   <properties>
   <property name="hibernate.dialect" 
	value="com.project.test.dialect.SybaseDialect"/>
   <property name="hibernate.hbm2ddl.auto" 
	value="update"/>
   <property name="hibernate.show_sql" 
	value="true"/>
	   .
	   .
	   .
   </properties>
</persistence-unit>
</persistence>

Please note the hibernate.dialect property. As its value I gave the canonical name of my custom SybaseDialect class.

Once you have finished implementing your custom dialect classes, you can JAR them and put them into the lib directory of your JBoss instance, where the rest of your Hibernate libraries are.

Do not forget to restart your JBoss – it needs to load the newly added JAR for you to be able to use it ;)

Multiple Return Statements

Yesterday I had a thought in my mind (which is already good – to have one) – how many return statements should a method have?

What's the difference between a method that looks like this:

public int calculateSum(int a, int b, int c)  {
	int result = -1;
	if (a % b == c)  {
	   result = c;
	}
	else if ((a + b - c) > (a - c))  {
	   result = a;
	}

	return result;
}

or the same method written like this:

public int calculateSum(int a, int b, int c)  {
	if (a % b == c)  {
	   return c;
	}
	else if ((a + b - c) > (a - c))  {
	   return a;
	}

	return -1;
}

I always thought (at least this is how I was taught) that having one return statement makes the code look more elegant, and that I should try to avoid multiple return statements. When I asked my colleague what he thinks about this, he said that at run time a method with multiple returns will execute faster, and that the code looks clearer. That sounded logical enough, and after doing some more investigation I have to say I tend to agree.

Some may say that having multiple return statements in a method can create confusion and make it hard to see all the exit points. But if that is the case, I would say that maybe the method needs to be refactored and simplified?
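As a sketch of that refactoring point (a made-up example, not from the post above): guard clauses with early returns make each exit point obvious and keep the main path unindented.

```java
public class GuardClauseDemo {

	// Hypothetical validation: normalize a username, rejecting bad input early
	static String normalize(String name) {
		if (name == null) {
			return ""; // exit 1: nothing to normalize
		}
		String trimmed = name.trim();
		if (trimmed.isEmpty()) {
			return ""; // exit 2: blank input
		}
		return trimmed.toLowerCase(); // main path, easy to follow
	}

	public static void main(String[] args) {
		System.out.println(normalize("  Alice ")); // prints alice
	}
}
```

If a method accumulates so many returns that they are hard to track, that is usually a sign it is doing several jobs at once and should be split.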

Deployment of MBean Separately to Its Interface

A few days ago I came across a nasty little issue during MBean deployment. What I did was separate my MBean class and its interface into two archives: first I would deploy an archive containing my interfaces, and then an archive containing my bean classes.

Why did I do it this way?

Well, to minimize the chances of getting a ClassCastException, since JBoss creates a proxy to the bean from its interface. Having the interfaces deployed separately from the bean itself allows me to easily modify the business logic in the bean (if needed). I then have to redeploy only the bean itself, without also redeploying the interface, so the proxies to the bean elsewhere in the system are not affected.
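That proxy argument can be sketched with plain JDK JMX (all class names and the ObjectName below are invented for the demo; this is not the JBoss deployment itself):

```java
import java.lang.management.ManagementFactory;
import javax.management.JMX;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ProxyDemo {

	// Standard MBean pair: the interface name must be the class name + "MBean"
	public interface EchoMBean {
		String echo(String s);
	}

	public static class Echo implements EchoMBean {
		public String echo(String s) { return s; }
	}

	public static String callThroughProxy(String msg) throws Exception {
		MBeanServer server = ManagementFactory.getPlatformMBeanServer();
		ObjectName name = new ObjectName("com.example:service=Echo");
		if (!server.isRegistered(name)) {
			server.registerMBean(new Echo(), name);
		}
		// The proxy is typed by the interface only; as long as the interface
		// class is untouched, redeploying the implementation cannot cause a
		// ClassCastException in callers holding the proxy.
		EchoMBean proxy = JMX.newMBeanProxy(server, name, EchoMBean.class);
		return proxy.echo(msg);
	}

	public static void main(String[] args) throws Exception {
		System.out.println(callThroughProxy("hello")); // prints hello
	}
}
```

The caller only ever compiles and links against EchoMBean, which is what makes keeping the interface archive stable worthwhile.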

To my big surprise, I got an exception:

	org.jboss.deployment.DeploymentException: Class does not expose a management interface:
	java.lang.Object; - nested throwable:
	(javax.management.NotCompliantMBeanException: Class does not expose a management interface: java.lang.Object)

I could not understand where I went wrong. I had my MBean class:

public class MyService extends ServiceMBeanSupport
		implements MyServiceMBean {

	public void startService() throws Exception {
		...
	}

	public void stopService() {
		...
	}
}

I had my interface named with the ‘MBean’ suffix. The interface name must end with this suffix; otherwise, you will get a Class does not expose a management interface exception:

	public interface MyServiceMBean extends ServiceMBean { ... }

I had my jboss-service.xml:

	<?xml version="1.0" encoding="UTF-8"?>
		<server>
			<mbean code="com.example.MyService" name="com.example:service=MyService">
			</mbean>
		</server>

Finally, I discovered that it was because of the way I did the packaging. If you are ever going to package an MBean and its interface in two separate archives, they (the MBean and its interface) must sit under the same package name!

For example: if, in archive A, I put my MBean class under the package name “com.example.test”, then in archive B I have to put its interface under “com.example.test” as well.
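Concretely (with hypothetical archive and file names), the packaging could look like this:

```
archive-B.jar  (interfaces, deployed first)
└── com/example/test/MyServiceMBean.class

archive-A.sar  (the MBean itself)
├── com/example/test/MyService.class
└── META-INF/jboss-service.xml
```

The key point is that MyService and MyServiceMBean share the package com.example.test even though they live in different archives; the management-interface lookup is done by name, so a package mismatch makes the bean look non-compliant.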