Tuesday 29 November 2011

JAX-WS, JAXB and Generics

When JAXB 2.1 came along, it introduced a new annotation, @XmlSeeAlso. It resolved the problem where, at runtime, JAX-WS would only know about classes bound by JAXB and not subclasses of those bound classes.

By using @XmlSeeAlso on a class, sub types of the bound class can be listed thereby allowing them to be bound also and therefore reachable by JAX-WS when marshalling and unmarshalling.

It can also be used to overcome the problem of using generics in a JAXB-annotated class, as the following simple example describes. Given the LogFile class below, it will only be marshalled/unmarshalled if the type T at runtime is bound.

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class LogFile<T> {

    @XmlElement(name="logFileLine")
    private List<T> logFileLines;

    public LogFile() {
        logFileLines = new ArrayList<T>();
    }

    public LogFile(List<T> logFileLines) {
        this.logFileLines = logFileLines;
    }

    public List<T> getLogFileLines() {
        return logFileLines;
    }

    public void setLogFileLines(List<T> logFileLines) {
        this.logFileLines = logFileLines;
    }

    public void addLogFileLine(T logFileLine) {
        this.logFileLines.add(logFileLine);
    }

}


If LogFile contains a collection of, say, Strings then this will work fine, but if T is a class that is not bound, e.g. com.city81.domain.LogLine, then an error similar to the one below will be thrown:

javax.xml.bind.MarshalException - with linked exception: [javax.xml.bind.JAXBException: class com.city81.domain.LogLine nor any of its super class is known to this context.]

To resolve this, the class LogLine needs to be included in the @XmlSeeAlso annotation.

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
@XmlSeeAlso(LogLine.class)
public class LogFile<T> {

    ....

}


This does of course mean that you need to know before runtime which classes T could be, but at least it means you can use generics on JAXB-annotated classes.
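
As a rough illustration, assuming a simple JAXB-annotated LogLine class with a no-arg constructor (the class itself is an assumption), marshalling a LogFile<LogLine> might look like this:

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

public class LogFileMarshaller {

    public static void main(String[] args) throws Exception {
        // LogLine is assumed to be a JAXB-annotated class listed in @XmlSeeAlso
        LogFile<LogLine> logFile = new LogFile<LogLine>();
        logFile.addLogFileLine(new LogLine());

        // the context only knows about LogFile; LogLine is reachable via @XmlSeeAlso
        JAXBContext context = JAXBContext.newInstance(LogFile.class);
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        marshaller.marshal(logFile, System.out);
    }
}

Without the @XmlSeeAlso(LogLine.class) annotation, the marshal call above is what produces the JAXBException shown earlier.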

Thursday 17 November 2011

Using Spring Expression Language for Server Specific Properties

Spring 3.0 introduced Spring Expression Language (SpEL). This post will describe how to use SpEL to load property files which are different for each server your application runs on. It also describes how server-specific properties can be used without having to resort to -D JVM arguments.

The problem solved was how can a WAR be deployed onto different servers without having to package up a server specific property file in each archive (or indeed bundle them all in the same WAR file.) Ideally, you would want to drop the WAR file into any environment without having to configure the container or amend the WAR, and if you wanted to change a property, you would change the property file on the server and redeploy the app (or dynamically refresh the cache of properties.)

In this example, our application needs to access a different remote server registry for each env: Dev, Test and Prod. (For ease in this example, our server names are the same as our environments!)

Therefore we have a properties file for each env/server (dev.properties, test.properties and prod.properties). We could have a local.properties file on each server, but prefixing them with the server name helps distinguish them. An example property file is shown below:

rmiRegistryHost=10.11.12.13
rmiRegistryPort=1099

We have a bean which requires these properties, so its constructor args contain property placeholders:

<bean id="lookupService" class="com.city81.rmi.LookupService" scope="prototype">
  <constructor-arg index="0" value="${rmiRegistryHost}" /> 
  <constructor-arg index="1" value="${rmiRegistryPort}" />
</bean>
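
For completeness, a minimal sketch of what the LookupService class might look like (only the constructor signature is implied by the bean definition above; the field types and getter are assumptions):

package com.city81.rmi;

public class LookupService {

    private final String rmiRegistryHost;
    private final int rmiRegistryPort;

    // the resolved ${rmiRegistryHost} and ${rmiRegistryPort} placeholder
    // values are injected here via the constructor-arg elements above
    public LookupService(String rmiRegistryHost, int rmiRegistryPort) {
        this.rmiRegistryHost = rmiRegistryHost;
        this.rmiRegistryPort = rmiRegistryPort;
    }

    public String getRegistryAddress() {
        return rmiRegistryHost + ":" + rmiRegistryPort;
    }
}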


For the above bean to be loaded, the properties need to be loaded themselves and this is done via the PropertyPlaceholderConfigurer class. The location and name of the server specific property file is added as one of the locations the configurer uses to search for properties.

The file is in the same location on each server but in order to know what the name is, SpEL is used. By creating a java.net.InetAddress bean, we can access the hostName of the server the application is running on by using SpEL ie #{inetAddress.hostName}. Therefore, this config doesn't have to change between environments.

<bean id="inetAddress" class="java.net.InetAddress" factory-method="getLocalHost">
</bean>
    
<bean id="propertyBean" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="ignoreResourceNotFound" value="false"/>
    <property name="ignoreUnresolvablePlaceholders" value="false"/>
    <property name="locations">
        <list>
            <value>file:/home/city81/resources/#{inetAddress.hostName}.properties</value>
        </list>
    </property>
</bean>


This is a very specific example where the property files are prefixed with the server names but it shows how SpEL can be used to solve a problem which would have taken a lot more work pre Spring 3.0.

Wednesday 16 November 2011

Spring MVC - A Bare Essentials Example Using Maven

Spring MVC is a request-based framework like Struts, but it clearly separates the presentation, request handling and model layers.

In this post, I'll describe how to get the most simple of examples up and running using Maven, therefore providing a basis upon which to add more features of Spring MVC like handler mappings, complex controllers, commons validator etc..

Let's start with the pom.xml file. This will package up the project as a war file and only requires three dependencies namely the artifacts spring-webmvc, servlet-api and jstl. The spring-webmvc artifact will pull in all the other required spring jars like core, web, etc.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <groupId>com.city81</groupId>
    <artifactId>spring-mvc</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>war</packaging>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.1-beta-1</version>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>3.0.5.RELEASE</version>
            <optional>false</optional>
        </dependency>
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>servlet-api</artifactId>
            <version>2.5</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>jstl</artifactId>
            <version>1.2</version>
        </dependency>
    </dependencies>
</project> 


Note that the scope of the servlet-api dependency is provided and it is therefore excluded from the war file. If deployed as part of the war, it will conflict with the container's own servlet jar and an error similar to the one below will occur when deploying to Tomcat:

INFO: Deploying web application archive spring-mvc-0.0.1-SNAPSHOT.war
15-Nov-2011 16:05:27 org.apache.catalina.loader.WebappClassLoader validateJarFile
INFO: validateJarFile(C:\apache-tomcat-6.0.33\webapps\spring-mvc-0.0.1-SNAPSHOT\WEB-INF\lib\servlet-api-2.5.jar) - jar not loaded. See Servlet Spec 2.3, section
 9.7.2. Offending class: javax/servlet/Servlet.class


Next the web.xml located in the /webapp/WEB-INF folder:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
 version="2.5">

    <display-name>Spring MVC Example</display-name>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <servlet>
        <servlet-name>springMVCExample</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>springMVCExample</servlet-name>
        <url-pattern>*.htm</url-pattern>
    </servlet-mapping>

    <welcome-file-list>
        <welcome-file>home.jsp</welcome-file>
    </welcome-file-list>

</web-app>

The context loader is a listener class called ContextLoaderListener. By default this will load the config in /WEB-INF/applicationContext.xml, but you can specify more files by adding a contextConfigLocation param listing one or more XML files. For example:


    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>
            /WEB-INF/example-persistence.xml
            /WEB-INF/example-security.xml
        </param-value>
    </context-param>


If an applicationContext.xml file isn't present when not using a context param list, then the WAR won't deploy properly. An example empty applicationContext.xml is below:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
 xmlns:aop="http://www.springframework.org/schema/aop"
 xsi:schemaLocation="http://www.springframework.org/schema/context 
    http://www.springframework.org/schema/context/spring-context.xsd
    http://www.springframework.org/schema/util 
    http://www.springframework.org/schema/util/spring-util.xsd
    http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/aop    
    http://www.springframework.org/schema/aop/spring-aop.xsd">

</beans>


The servlet's configuration does not need to be explicitly specified as it is loaded by convention from a file named after the servlet, in this case springMVCExample-servlet.xml. The servlet is the front controller which delegates requests to other parts of the system.

The servlet-mapping tags in the web.xml denote what URLs the DispatcherServlet will handle. In this example, any URL ending in .htm.

Also included in the web.xml, by way of the welcome-file, is a default home page. This doesn't have to be included, but the page can be used to forward a request, as can be seen later in the post.

As mentioned previously, the servlet's config is in its own XML file. This describes the mapping between a URL and the Controller which will handle the request. It also contains a ViewResolver which maps the view name in the ModelAndView to an actual view. The springMVCExample-servlet.xml is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://www.springframework.org/schema/beans  
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <bean name="/example.htm" class="com.city81.spring.mvc.ExampleController">
    </bean>

    <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="prefix">
            <value>/WEB-INF/jsp/</value>
        </property>
        <property name="suffix">
            <value>.jsp</value>
        </property> 
    </bean>

</beans>


The ExampleController class is shown below. It extends AbstractController and therefore must implement the method handleRequestInternal(HttpServletRequest request, HttpServletResponse response). The return value of this method is a ModelAndView object, constructed by passing in the view name (example), the model name (message) and the model object (in this case a String).

package com.city81.spring.mvc;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.AbstractController;

public class ExampleController extends AbstractController {
  
    protected ModelAndView handleRequestInternal(
        HttpServletRequest request, HttpServletResponse response) 
            throws Exception {
        ModelAndView modelAndView = new ModelAndView("example", "message", "Spring MVC Example");

        return modelAndView;
    }

}


The view jsp will then have the ${message} value populated by the model object from the ModelAndView. The example.jsp is below and resides in the /webapp/WEB-INF/jsp folder:

<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
    <head>
        <title>Spring MVC Example</title>
    </head>
    <body>
        <h2>Welcome to the Example Spring MVC page</h2>
        <h3>The message text is:</h3>
        <p>${message}</p>
    </body>
</html>


As mentioned previously, a default home page can be included in the web.xml and, instead of displaying html etc., it can be used to redirect requests to another URL. The below home.jsp redirects requests to example.htm:

<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<c:redirect url="/example.htm"/>


Deploying the above to a container like Tomcat should result in a page displaying the welcome heading and the message text 'Spring MVC Example'.



This is just the very basics of Spring MVC, but later posts will expand on the framework and show how it can be used with, for example, RESTful web services.

Thursday 13 October 2011

Spring, JMS, Listener Adapters and Containers

In order to receive JMS messages, Spring provides the concept of message listener containers. These are beans that can be tied to receive messages that arrive at certain destinations. This post will examine the different ways in which containers can be configured.

A simple example is below where the DefaultMessageListenerContainer has been configured to watch one queue (the property jms.queue.name) and has a reference to a myMessageListener bean which implements the MessageListener interface (i.e. onMessage):

<bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory" /> 
    <property name="destinationName" value="${jms.queue.name}" /> 
    <property name="messageListener" ref="myMessageListener" />
</bean> 


This is all very well but means that the myMessageListener bean will have to handle the JMS Message object and process accordingly depending upon the type of javax.jms.Message and its payload. For example:

if (message instanceof MapMessage) {
    // cast, get object, do something
}


An alternative is to use a MessageListenerAdapter. This class abstracts away the above processing and leaves your code to deal with just the message's payload. For example:

<bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory" />
    <property name="destinationName" value="${jms.queue.name}" />
    <property name="messageListener" ref="myMessageAdapter" />
</bean> 

<bean id="myMessageAdapter" class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
    <property name="delegate" ref="myMessageReceiverDelegate" />
    <property name="defaultListenerMethod" value="processMessage" />
</bean>


The delegate is a reference to a myMessageReceiverDelegate bean which has one or more methods called processMessage. It does not need to implement the MessageListener interface. This method can be overloaded to handle different payload types; behind the scenes, Spring will determine which one gets called. For example:

public void processMessage(final HashMap message) {
    // do something
}

public void processMessage(final String message) {
    // do something
}
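
Pulling those methods together, the myMessageReceiverDelegate bean might look like the following sketch (the class name is an assumption; note that it does not implement MessageListener):

import java.util.HashMap;

public class MyMessageReceiverDelegate {

    // called by the adapter when a MapMessage arrives; Spring converts
    // the payload to a Map before invoking this method
    public void processMessage(final HashMap message) {
        // do something with the map entries
    }

    // called by the adapter when a TextMessage arrives
    public void processMessage(final String message) {
        // do something with the text
    }
}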


For the given approach though, only one queue can be tied to the container. Another approach is to tie many listeners (and therefore many queues) to the one container. The below Spring XML, using the jms namespace, shows how two listeners for different queues can be tied to one container:

<jms:listener-container container-type="default" 
  connection-factory="connectionFactory" acknowledge="auto">  
    <jms:listener destination="${jms.queue.name1}" ref="myMessageReceiverDelegate" method="processMessage" />  
    <jms:listener destination="${jms.queue.name2}" ref="myMessageReceiverDelegate" method="processMessage" />  
</jms:listener-container>


The myMessageReceiverDelegate bean is treated as an adapter delegate and therefore does not need to implement the MessageListener interface. Each listener can have a different delegate, but in the above example all messages arriving at the two queues are processed by the one receiver bean, i.e. myMessageReceiverDelegate.

If there is a need to check the message type and extract the payload, then the listener can use a class which implements the MessageListener interface (eg the myMessageListener bean used in the first example). The onMessage method will then be called when messages arrive at the specified destination:

<jms:listener-container container-type="default" 
  connection-factory="connectionFactory" acknowledge="auto">  
    <jms:listener destination="${jms.queue.name1}" ref="myMessageListener" />  
    <jms:listener destination="${jms.queue.name2}" ref="myMessageListener" />  
</jms:listener-container>

Friday 16 September 2011

Spring JMS with ActiveMQ

ActiveMQ is a powerful open source messaging broker, and is very easy and straightforward to use with Spring, as the classes and XML below show. The example is the bare minimum needed to get up and running with transactions and message converters.

On the sending side, the ActiveMQ connection factory needs to be created with the URL of the broker. This is in turn used to create the Spring caching connection factory and, as no session cache size is supplied, the default of one is used. A JmsTemplate built on this connection factory is then injected into the MessageSender class:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:amq="http://activemq.apache.org/schema/core" 
 xmlns:context="http://www.springframework.org/schema/context"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:tx="http://www.springframework.org/schema/tx"
 xsi:schemaLocation="http://www.springframework.org/schema/beans
  http://www.springframework.org/schema/beans/spring-beans.xsd  
  http://activemq.apache.org/schema/core 
  http://activemq.apache.org/schema/core/activemq-core-5.5.0.xsd  
  http://www.springframework.org/schema/context 
  http://www.springframework.org/schema/context/spring-context.xsd  
  http://www.springframework.org/schema/jms 
  http://www.springframework.org/schema/jms/spring-jms.xsd
  http://www.springframework.org/schema/tx
  http://www.springframework.org/schema/tx/spring-tx.xsd">

    <!-- Activemq connection factory -->
    <bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
        <constructor-arg index="0" value="${jms.broker.url}"/>
    </bean>

    <!-- ConnectionFactory Definition -->
    <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
        <constructor-arg ref="amqConnectionFactory" />
    </bean>

    <!--  Default Destination Queue Definition-->
    <bean id="defaultDestination" class="org.apache.activemq.command.ActiveMQQueue">
        <constructor-arg index="0" value="${jms.queue.name}"/>
    </bean>

    <!-- JmsTemplate Definition -->
    <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
        <property name="connectionFactory" ref="connectionFactory" />
        <property name="defaultDestination" ref="defaultDestination" /> 
    </bean>

    <!-- Message Sender Definition -->
    <bean id="messageSender" class="com.city81.jms.MessageSender">
        <constructor-arg index="0" ref="jmsTemplate" />
    </bean>

</beans> 


An example sending class is below. It uses the convertAndSend method of the JmsTemplate class. As there is no destination argument, the message will be sent to the default destination which was set up in the XML file:

import java.util.Map;
import org.springframework.jms.core.JmsTemplate;

public class MessageSender {

    private final JmsTemplate jmsTemplate;

    public MessageSender(final JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send(final Map map) {
        jmsTemplate.convertAndSend(map);
    }

}


On the receiving side, there needs to be a listener container. The simplest example of this is the SimpleMessageListenerContainer. This requires a connection factory, a destination (or destination name) and a message listener.

An example of the Spring configuration for the receiving messages is below:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:amq="http://activemq.apache.org/schema/core"

 xmlns:jms="http://www.springframework.org/schema/jms"
 xmlns:context="http://www.springframework.org/schema/context"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://www.springframework.org/schema/beans
  http://www.springframework.org/schema/beans/spring-beans.xsd  
  http://activemq.apache.org/schema/core 
  http://activemq.apache.org/schema/core/activemq-core-5.5.0.xsd  
  http://www.springframework.org/schema/context 
  http://www.springframework.org/schema/context/spring-context.xsd  
  http://www.springframework.org/schema/jms 
  http://www.springframework.org/schema/jms/spring-jms.xsd">

    <!-- Activemq connection factory -->
    <bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
        <constructor-arg index="0" value="${jms.broker.url}"/>
    </bean>

    <!-- ConnectionFactory Definition -->
    <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
        <constructor-arg ref="amqConnectionFactory" />
    </bean>  

    <!-- Message Receiver Definition -->
    <bean id="messageReceiver" class="com.city81.jms.MessageReceiver">
    </bean>

    <bean class="org.springframework.jms.listener.SimpleMessageListenerContainer">
        <property name="connectionFactory" ref="connectionFactory" />
        <property name="destinationName" value="${jms.queue.name}" />
        <property name="messageListener" ref="messageReceiver" />
    </bean>

</beans> 


The listening/receiving class needs to implement the javax.jms.MessageListener interface and its onMessage method:

import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageListener;

public class MessageReceiver implements MessageListener {

    public void onMessage(final Message message) {
        if (message instanceof MapMessage) {
            final MapMessage mapMessage = (MapMessage) message;
            // do something
        }
    }

}


To then send a message would be as simple as getting the sending bean from the bean factory as shown in the below code:

    MessageSender sender = (MessageSender) factory.getBean("messageSender");
    Map map = new HashMap();
    map.put("Name", "MYNAME");
    sender.send(map);
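
The factory variable above is simply the Spring bean factory; it could, for example, be created from the sender configuration shown earlier (the file name jms-sender-context.xml is an assumption):

import org.springframework.context.support.ClassPathXmlApplicationContext;

// load the sender Spring configuration from the classpath
ClassPathXmlApplicationContext factory =
    new ClassPathXmlApplicationContext("jms-sender-context.xml");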


I will try to expand and build up the JMS and Spring articles with examples of using transactions and other brokers like MQSeries.

Monday 6 June 2011

JUnit, DBUnit and Oracle

As previously blogged, DBUnit is a powerful addition to your unit test armoury. Having used it with Oracle, there are a few nuances which are worth writing about and therefore remembering!

When creating a connection to an Oracle database, DBUnit will populate a Map of table names. By default these will not be prefixed with the schema name. As a result, you can get duplicate table names. An example of an exception raised for the duplicate table WWV_FLOW_DUAL100 is shown below:

org.dbunit.database.AmbiguousTableNameException: WWV_FLOW_DUAL100 at org.dbunit.dataset.OrderedTableNameMap.add(OrderedTableNameMap.java:198)
 at org.dbunit.database.DatabaseDataSet.initialize(DatabaseDataSet.java:231)
 at org.dbunit.database.DatabaseDataSet.getTableMetaData(DatabaseDataSet.java:281)
 at org.dbunit.operation.AbstractOperation.getOperationMetaData(AbstractOperation.java:80)
 at org.dbunit.operation.AbstractBatchOperation.execute(AbstractBatchOperation.java:140)


To resolve this, set the database config property FEATURE_QUALIFIED_TABLE_NAMES to be true. This will make sure all tables in the Map are unique. As a consequence of this, the table names in the XML data file will need to be prefixed with the schema name.

Another useful database config property is PROPERTY_DATATYPE_FACTORY. In the XML data file, if there are dates with the time element set then the time element will be ignored unless the database config property PROPERTY_DATATYPE_FACTORY is set to new Oracle10DataTypeFactory() (or the equivalent data factory for the version of Oracle being used.)
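
For example, a flat XML dataset with schema-qualified table names and a timestamp column might look like the following (the schema, table and column names are assumptions):

<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <MYSCHEMA.CUSTOMER ID="1" SURNAME="SMITH" CREATED_DATE="2011-06-06 09:30:00.0"/>
    <MYSCHEMA.CUSTOMER ID="2" SURNAME="JONES" CREATED_DATE="2011-06-06 14:45:00.0"/>
</dataset>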

An example of setting these values in a @BeforeClass annotated JUnit method is shown below:

@BeforeClass
public static void loadDataset() throws Exception {

    // database connection
    ResourceCache resourceCache = ResourceCache.getInstance();
    String driverClassString = resourceCache.getProperty("datasource.driver.class.name");
    String databaseURL = resourceCache.getProperty("datasource.url");
    String username = resourceCache.getProperty("test.datasource.username");
    String password = resourceCache.getProperty("test.datasource.password");

    Class driverClass = Class.forName(driverClassString);
    Connection jdbcConnection = DriverManager.getConnection(databaseURL, username, password);              
    connection = new DatabaseConnection(jdbcConnection);
    DatabaseConfig config = connection.getConfig();
    config.setProperty(DatabaseConfig.FEATURE_QUALIFIED_TABLE_NAMES, true);
    config.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new Oracle10DataTypeFactory());         
        
    FlatXmlDataSetBuilder flatXmlDataSetBuilder = new FlatXmlDataSetBuilder();
    flatXmlDataSetBuilder.setColumnSensing(true);
    dataset = flatXmlDataSetBuilder.build(Thread.currentThread()
        .getContextClassLoader()
        .getResourceAsStream("Test-dataset.xml"));
 }

Wednesday 4 May 2011

Using Spring's StoredProcedure class with Oracle Spatial JGeometry

The Spring framework provides a neat wrapper class you can extend when you want to call a stored procedure. Sometimes though you need to use a native connection because, for example, you need to pass an Oracle geometry type to the stored procedure.

The below code wraps a Create_Geometry_Line stored procedure which takes a geometry object. The calling code will use the executeMap method to pass in a Map of in parameters. It converts the list of doubles (which represent the points of the line) to an object of type STRUCT.

package uk.co.city81.persistence.dao.geometry;

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;
import java.util.Map;

import oracle.jdbc.OracleTypes;
import oracle.spatial.geometry.JGeometry;
import oracle.sql.STRUCT;

import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.object.StoredProcedure;


public class CreateGeometryLineDAO extends StoredProcedure {

    private Connection nativeConn = null;

    public static final String CREATE_GEOMETRY_LINE = "geometry_pkg.Create_Geometry_Line";

    // Constructor
    public CreateGeometryLineDAO(JdbcTemplate jdbcTemplate) {
        setJdbcTemplate(jdbcTemplate);
        setFunction(true);
        setSQL();
        declareParameter(new SqlOutParameter("l_Return", Types.INTEGER));
        declareInParameters();
        declareOutParameters();
        compile();
    }

    // Get the native connection
    protected Connection getNativeConn() throws SQLException {  
        if ((nativeConn == null) || nativeConn.isClosed()) {
            nativeConn = this.getJdbcTemplate().getNativeJdbcExtractor()
                .getNativeConnection(this.getJdbcTemplate()
                .getDataSource().getConnection());
        }
        return nativeConn;   
    }

    protected void declareInParameters() {
        declareParameter(new SqlParameter("p_geom_data",OracleTypes.STRUCT)); 
    }

    protected void declareOutParameters() {
        // declare out params
    }

    protected void setSQL() {
        setSql(CREATE_GEOMETRY_LINE);
    }

    /**
     * Execute the stored procedure
     * 
     * @param inParamMap a map of the stored procedure parameters
     */
    public Map executeMap(Map inParamMap) {
        Map outParamMap = null;
        try {  
            if (inParamMap.get("p_geom_data") != null) {
                java.util.List<Double> ordinates 
                    = (java.util.List<Double>) inParamMap.get("p_geom_data");
                double[] ordinateDoubles = new double[ordinates.size()];
                int count = 0;
                for (Double ordinate : ordinates) {
                    ordinateDoubles[count] = ordinate.doubleValue();
                    count++;
                }

                int[] elemInfo = new int[]{1,2,1};
                JGeometry j_geom = new JGeometry(
                    JGeometry.GTYPE_CURVE,8307, elemInfo,ordinateDoubles);        
                STRUCT obj = JGeometry.store(j_geom, getNativeConn());
                inParamMap.put("p_geom_data", obj);
            }

            outParamMap = super.execute(inParamMap);

        } catch (DataAccessException daex) {
            // throw exception
        } catch (SQLException e) {
            // throw exception
        }

        return outParamMap;
    }

}


The jdbcTemplate can be injected into the DAO's constructor. The extracts from the context xml are below:

    <bean id="datasource" class="org.springframework.jdbc.datasource.DriverManagerDataSource" 
        destroy-method="close">

        <property name="driverClassName">
            <value>oracle.jdbc.OracleDriver</value>
        </property>
        <property name="url">
            <value>jdbc:oracle:thin:@server:port:db</value>
        </property>
        <property name="username">
            <value>username</value>
        </property>
        <property name="password">
            <value>password</value>
        </property>
    </bean>

    <bean id="nativeJdbcExtractor" class="org.springframework.jdbc.support.nativejdbc.SimpleNativeJdbcExtractor" 
      lazy-init="true"/>

    <bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
        <property name="dataSource">
            <ref bean="datasource" />
        </property>
        <property name="nativeJdbcExtractor">
            <ref bean="nativeJdbcExtractor" />
        </property>
    </bean>

    <bean id="createGeometryLineDAO" class="uk.co.city81.persistence.dao.geometry.CreateGeometryLineDAO">
        <constructor-arg ref="jdbcTemplate" />
    </bean>

Friday 15 April 2011

JAAS, EJB Security and Glassfish

With EJBs you can specify security by using annotations from the javax.annotation.security package. This post describes how to set up security on a bean and access the methods of the bean via an annotation or via JAAS which has been set up on Glassfish.

The below bean class only allows the MANAGER role access to use the services exposed. In this case, findCustomerByAccountNumber. (The AccessRoles.MANAGER resolves to a string so if the string changes the CustomerServiceBean doesn't have to change.)

@RolesAllowed(AccessRoles.MANAGER)
@Stateless(mappedName = JndiResourceName.CUSTOMER_SERVICE)
@Remote(CustomerService.class)
public class CustomerServiceBean implements CustomerService {

    @Override
    public Customer findCustomerByAccountNumber(String accountNumber) {
        Customer customer = null;
        // do stuff to find customer
        return customer;
    } 

}
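
For reference, the AccessRoles class referred to above could simply hold the role name constants (a sketch; the actual role values are assumptions):

public final class AccessRoles {

    // role names used by @RolesAllowed and @RunAs
    public static final String MANAGER = "MANAGER";
    public static final String OPERATOR = "OPERATOR";
    public static final String ADMIN = "ADMIN";

    private AccessRoles() {
        // constants only
    }
}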


To call the findCustomerByAccountNumber, the code can use the @RunAs annotation as described below:

    @RunAs(AccessRoles.MANAGER)
    public void verifyCustomer(String accountNumber) {
        // do stuff 
        Customer customer = 
            customerService.findCustomerByAccountNumber(accountNumber);
        // do more stuff 
    }


But what if the roles calling the verify method can vary, e.g. MANAGER, OPERATOR or ADMIN? In this scenario, we would want to authenticate the 'caller' before accessing the findCustomerByAccountNumber method. A solution to this would be to use JAAS (Java Authentication and Authorization Service).

The principle of this is to create realms and have users and groups in the realm. There are a few steps involved, described as follows:

Firstly, a realm needs to be created on the app server (in this example Glassfish) and the below describes how to do this using the command line. It assumes the connection pools and the user and group database tables have been created and populated, and that flexiblejdbcrealm-0.4.jar is in the Glassfish lib dir:

asadmin --host  delete-auth-realm customer-realm
asadmin --host  create-auth-realm --classname=org.wamblee.glassfish.auth.FlexibleJdbcRealm --property="jaas.context=customerJdbcRealm:datasource.jndi=
jdbc/Customer:sql.password=select password from customeruser where username\=?:sql.groups=select g.groupname from customergroup g inner join user_group ug on g.id\=ug.group_id inner join customeruser u on ug.user_id\=u.id where u.username\=?:password.digest=MD5:password.encoding=BASE64" customer-realm

asadmin --host  --user admin set server-config.security-service.activate-default-principal-to-role-mapping=true
asadmin --host  set-log-level javax.enterprise.system.core.security=INFO
asadmin --host  set-log-level org.wamblee.glassfish.auth=INFO

The login.conf needs to have the below added to it:

customerJdbcRealm {com.mypackage.auth.CustomerLoginModule required;}

The CustomerLoginModule class extends FlexibleJdbcLoginModule and gives us the ability to intercept the login/authentication calls if we so wish. In this case, any login exceptions are being logged:

public class CustomerLoginModule extends FlexibleJdbcLoginModule implements LoginModule {

    private static final Logger SECURITY_LOGGER = Logger.getLogger("com.mypackage");

    @Override
    protected void authenticate() throws javax.security.auth.login.LoginException {

        try {
            super.authenticate();
        } catch(LoginException le){
            SECURITY_LOGGER.error("Authentication failed for " 
                + _username + ". " + le.getMessage());
            throw le;
        }

    }

}


We can now change the verify method to authenticate before calling the 'secure'  findCustomerByAccountNumber method:

    private ProgrammaticLoginInterface programmaticLogin = 
        new ProgrammaticLogin();

    public void verifyCustomer(String accountNumber) {
        Customer customer = null;
        boolean loginSuccessful = programmaticLogin.login("manager", "password", "customer-realm", true);  

        if (loginSuccessful) { 
            customer = customerService.findCustomerByAccountNumber(accountNumber);
        } else {
            // throw exception 
        }

    }


The call to the ProgrammaticLogin instance attempts to use the supplied name and password directly to login to the current realm. If successful, a security context is created for that user and is used by the EJB when checking what roles are allowed to call it.

For example purposes, the above verifyCustomer method has the name and password hard coded but in reality these values could be obtained from a login web page or other such authentication mechanisms.

Thursday 31 March 2011

Portable Global JNDI Names and Maven

With the advent of the EJB 3.1 specification, the JNDI names for session beans have become portable via the java:global namespace.

As described on the Glassfish EJB FAQ page (http://glassfish.java.net/javaee5/ejb/EJB_FAQ.html#What_is_the_syntax_for_portable_global_) the syntax for the global namespace is:

java:global[/<app-name>]/<module-name>/<bean-name>

where the app-name is the ear name (if the app is deployed as an ear), the module-name the ejb jar or war name and the bean-name the session bean name. Note that names are unqualified and minus the extension.

Given the syntax an example of using the portable JNDI name using the @EJB annotation would be:

@EJB(lookup=
    "java:global/customer-ear/customer-1-0-0/CustomerServiceBean")

The JNDI name may now be portable, but the above code would not be reusable if the customer jar name changed. It is quite often the case that jars will have versions in their name, e.g. customer-1-0-0.jar, and you would not want to change the module name in the lookup (and therefore your code) every time there was a new release of the customer jar.

One option would be to rename the jar when building the ear file so the module name defaults to the jar name. To do this in Maven, explicitly add the ejbModule to the ear plugin as shown below. The name of the jar is whatever is specified in the bundleFileName tag.

    <plugin>
        <artifactId>maven-ear-plugin</artifactId>
        <version>2.4</version>
        <configuration>
            <generateApplicationXml>true</generateApplicationXml>
            <archive>
                <manifest>
                    <addClasspath>true</addClasspath>
                </manifest>
            </archive>
            <version>6</version>
            <defaultLibBundleDir>lib</defaultLibBundleDir>
            <earSourceDirectory>resources</earSourceDirectory>
            <modules>
                <ejbModule>
                    <groupId>com.mybank.customer</groupId>
                    <artifactId>customer-server</artifactId>
                    <bundleFileName>customer.jar</bundleFileName>
                </ejbModule>
            </modules>
        </configuration>
    </plugin>


The lookup would then be as below:

@EJB(lookup=
    "java:global/customer-ear/customer/CustomerServiceBean")

This means the code containing the injection point doesn't have to change with the names of the jars.

Java Collections and Threading

In the java.util.concurrent package, there exist several collection classes which aid the development of thread-safe code. Below is a brief overview of five of the most useful classes and a comparison of them with some of the collections that exist in the java.util package.


ConcurrentHashMap<K,V>

java.util.HashMap is a collection which holds key value pairs but it is not synchronised. If multiple threads are accessing such a collection, the keys and values being put into the collection may not be visible to those threads calling get. A solution would be to create a synchronised version by calling the static method Collections.synchronizedMap() but this would 'lock' all operations on the Map.

Another solution would be to use ConcurrentHashMap<K,V>. The idea behind this class is that only the bucket that holds the data being accessed is 'locked'. This allows (more often than not) the get operation to work without blocking. Note that an iterator obtained from a ConcurrentHashMap may not reflect modifications made to the map after the iterator was obtained; in return, the iterator will not throw ConcurrentModificationException.
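
A small illustrative sketch (the class and method names are mine) of a thread-safe counter using ConcurrentHashMap's atomic putIfAbsent and replace methods:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PageHitCounter {

    private final ConcurrentMap<String, Integer> hits =
        new ConcurrentHashMap<String, Integer>();

    // safe to call from multiple threads without external synchronisation
    public void recordHit(String page) {
        // putIfAbsent atomically inserts the initial count if no mapping exists
        Integer current = hits.putIfAbsent(page, 1);
        // replace(key, expected, new) only succeeds if the value hasn't changed,
        // so retry until our increment wins
        while (current != null && !hits.replace(page, current, current + 1)) {
            current = hits.get(page);
        }
    }

    public Integer getHits(String page) {
        return hits.get(page);
    }
}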


ConcurrentSkipListMap<K,V>

java.util.concurrent.ConcurrentSkipListMap offers similar functionality to the above class but it maintains a sorted order (either the natural order or the order based on using a Comparator.) It also provides methods like ceilingKey, lowerEntry and tailMap. Because of the ordering, insertion can be slower than that of a ConcurrentHashMap but iterating over the collection would be faster.


ConcurrentSkipListSet<E>

Similar to the above but is a NavigableSet.


CopyOnWriteArrayList<E>

java.util.ArrayList is a collection which holds a resizable list of entries but it is not synchronised. As with HashMap, a solution would be to create a synchronised version by calling the static method Collections.synchronizedList(), but this would 'lock' all operations on the List. Another solution would be to use CopyOnWriteArrayList<E>, which is best suited to lists where reads are frequent and writes are infrequent. It works by making a change to a copy of the list and then replacing the existing list with the modified copy.
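
A short sketch (class and method names are mine) of the read-mostly pattern that CopyOnWriteArrayList suits:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class EventPublisher {

    // reads (iteration on publish) are frequent, writes (subscribe) are rare
    private final List<String> subscribers = new CopyOnWriteArrayList<String>();

    public void subscribe(String subscriberName) {
        // each add copies the underlying array and swaps in the new copy
        subscribers.add(subscriberName);
    }

    public void publish(String event) {
        // iteration works on a snapshot, so concurrent subscribe calls
        // will not cause a ConcurrentModificationException
        for (String subscriber : subscribers) {
            System.out.println("Notifying " + subscriber + " of " + event);
        }
    }
}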


CopyOnWriteArraySet<E>

Similar to the above but is a Set.

Friday 25 March 2011

Thread-safe and lock free variables

The java.util.concurrent.atomic package contains classes which aim to ease concurrency concerns in multithreaded applications. They offer better performance than using synchronisation because their operations on fields do not require locking.

The classes AtomicBoolean, AtomicInteger, AtomicLong and AtomicReference enable the primitives boolean, int, long and an object reference to be atomically updated. They all provide utility methods like compareAndSet which takes an expected value and the update value and returns false if the expected value doesn't equal (==) the current value. The AtomicInteger and AtomicLong classes also provide atomic pre and post decrement/increment methods like getAndIncrement or incrementAndGet.

An example use of one of the 'scalar' atomic classes is shown below:

public class IdentifierGenerator {

    private AtomicLong seed = new AtomicLong(0);

    public long getNextIdentifier() {
        return seed.getAndIncrement();
    }

} 


The int and long primitives, and also object references can be held in arrays where the elements are atomically updated namely AtomicIntegerArray, AtomicLongArray and AtomicReferenceArray.

There also exist field updater classes, namely AtomicIntegerFieldUpdater, AtomicLongFieldUpdater and AtomicReferenceFieldUpdater. With these classes you can still use the methods on the class itself but use the updater class to manage the field that requires atomic access via methods like compareAndSet. Note that, because the field can still be modified directly by the owning class, atomicity isn't guaranteed.

These are abstract classes and the below (contrived!) example shows how an instance would be created:

    public void setBalance(Customer customer, int existingBalance, 
            int newBalance) {

        // note: the balance field on Customer must be declared as a
        // volatile int for AtomicIntegerFieldUpdater to be used
        AtomicIntegerFieldUpdater<Customer> balanceAccessor = 
            AtomicIntegerFieldUpdater.newUpdater(Customer.class, "balance");
        balanceAccessor.compareAndSet(customer, existingBalance, newBalance);

    }

Thursday 24 March 2011

Weak and Soft References and the Garbage Collector

Java uses the Garbage Collector to manage memory. It does this by freeing up memory no longer needed by the application ie memory used by objects that are not referenced anymore, in other words, objects that are ‘unreachable’.

Typically, objects are strongly referenced so they remain in memory until no longer reachable,

Customer customer = new Customer();

but there are certain scenarios where you'd want to keep references to objects and also allow them to be garbage collected if need be, e.g. when memory is at a premium. A solution to this problem is to use Reference Objects.

There are three concrete implementations of the java.lang.ref.Reference<T> class: WeakReference, SoftReference and PhantomReference.

Objects held only by a WeakReference are deemed to be weakly reachable and are therefore eligible for garbage collection. An example of creating a WeakReference is below:

WeakReference<Customer> weakCustomer = new
    WeakReference<Customer>(customer);

To obtain the customer object held by the weak reference, you would call weakCustomer.get(). This will always return the customer object unless the garbage collector has deemed the object to be weakly reachable and has therefore freed it from memory. In that case, calling the get method will return null.

If you want to hold a collection of the objects that can be garbage collected, then you can use a WeakHashMap. This collection uses weak references as keys, so when the key becomes garbage collected, the entry for that key in the WeakHashMap gets removed. This collection class isn't synchronized, so use Collections.synchronizedMap for a synchronized version, but beware that, whether synchronized or not, the size of the Map may decrease over time as the garbage collector removes entries, so an iterator may throw ConcurrentModificationException.

A SoftReference holds onto the object more strongly than a WeakReference and if memory is not a problem the garbage collector will not free up the memory used by objects held by a soft reference.

You can also create another type of reference to an object, a PhantomReference. Calling the get method on such a reference will always return null, thereby preventing you from 'resurrecting' the object. Its only real use is when you pass a ReferenceQueue into the PhantomReference constructor.

You can pass a ReferenceQueue into the constructor of all the Reference types. It enables you to keep track of objects which have been garbage collected. When the object’s finalize method is called, the object will be placed on the reference queue. This process is called ‘enqueuing’. You then know when it was removed from memory.
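
A small sketch of using a ReferenceQueue with a WeakReference (System.gc() is only a hint, so the example may not always print the message):

import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class ReferenceQueueExample {

    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
        Object customer = new Object();
        WeakReference<Object> weakCustomer =
            new WeakReference<Object>(customer, queue);

        // drop the strong reference so the object becomes weakly reachable
        customer = null;
        System.gc();

        // remove blocks (here for up to a second) until a cleared reference
        // has been enqueued by the garbage collector
        Reference<?> enqueued = queue.remove(1000);
        if (enqueued == weakCustomer) {
            System.out.println("The customer object has been garbage collected");
        }
    }
}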

In the case of a PhantomReference, passing in a null ReferenceQueue would be a pointless exercise, rendering the creation of the PhantomReference useless.


Tuesday 8 March 2011

Testing JPA Entities using DBUnit

DBUnit is a powerful testing tool that allows you to put your database into a known state in between tests. A common problem (and one of poor test design) is where a test changes the state of an object and this state affects the testing of objects in later tests. Each test should be independent and tests within a suite should be able to run in any order. This post will aim to describe the code needed to setup a database before each run of a JUnit test method.

The below class has three key methods:
  • initEntityManager() - this method is annotated with @BeforeClass so will only be called once for this test and called before any tests are run. The method will create an entity manager, get a connection to an instance of the in memory Java database HSQL and then populate it by reading in the flat xml database record structure that is contained in the file test-dataset.xml. For alternative dataset formats please visit http://www.dbunit.org/apidocs/org/dbunit/dataset/IDataSet.html
  • closeEntityManager() - this method is annotated with @AfterClass so will only be called once for this test and called after all the tests have been run. The method will close down the entity manager and the entity manager connection factory.
  • cleanDB() - this method is annotated with @Before and will be called before every test method call. The call to  DatabaseOperation.CLEAN_INSERT.execute will clean the database tables and insert the records from the dataset xml file. Database operations other than CLEAN_INSERT are described below:
            UPDATE - updates the database using the data in the dataset.
            INSERT - inserts the data in the dataset into the database.
            DELETE - deletes only the data in the dataset from the database.
            DELETE_ALL - deletes all rows of the tables present in the specified dataset.
            TRUNCATE_TABLE - truncates the tables listed in the dataset.
            REFRESH - refreshes the database using the data in the dataset.
            NONE - does what it says, nothing.

import org.dbunit.database.DatabaseConfig;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.ext.hsqldb.HsqldbDataTypeFactory;
import org.dbunit.operation.DatabaseOperation;
import org.hibernate.impl.SessionImpl;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.junit.Test;

public class JPATest {

    protected static EntityManagerFactory entityManagerFactory;
    protected static EntityManager entityManager;
    protected static IDatabaseConnection connection;
    protected static IDataSet dataset;

    @BeforeClass
    public static void initEntityManager() throws Exception {
        entityManagerFactory = Persistence.createEntityManagerFactory("PersistenceUnit");
        entityManager = entityManagerFactory.createEntityManager();
        connection = new DatabaseConnection(((SessionImpl)(entityManager.getDelegate())).connection());
        connection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new HsqldbDataTypeFactory());

        FlatXmlDataSetBuilder flatXmlDataSetBuilder = new FlatXmlDataSetBuilder();
        flatXmlDataSetBuilder.setColumnSensing(true);
        dataset = flatXmlDataSetBuilder.build(
        Thread.currentThread().getContextClassLoader().getResourceAsStream("test-dataset.xml"));
    }

    @AfterClass
    public static void closeEntityManager() {
        entityManager.close();
        entityManagerFactory.close();
    }

    @Before
    public void cleanDB() throws Exception {
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataset);
    }

    @Test
    public void testAUsefulMethod() throws Exception {
        // .... test code
    }

}


An example structure of the dataset.xml is shown below:

<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <customer id="1" surname="SURNAME" firstName="FIRSTNAME" middleName="MIDDLENAME" personalAccount="true"/>
    <customer id="2" surname="SURNAME" firstName="FIRSTNAME" personalAccount="true" balance="100000.00"/>
</dataset>


One point of note is that when deleting database records using DBUnit, the same rules apply as if you were using SQL: you will not be able to delete a record if its primary key is referenced as a foreign key by another table. DBUnit deletes tables in the reverse of the order in which they appear in the dataset, so an example in flat XML for a Customer class that has many Addresses would be:

<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <customer/>
    <address/>
    <customer_address/>
</dataset>


Finally, there are many useful methods on IDataSet and ITable interfaces. An example being to obtain the number of records of a particular table:

int customerRecordCount = dataset.getTable("Customer").getRowCount();

Monday 7 March 2011

Useful JMS-related Glassfish and OpenMq commands

Below are a few Glassfish JMS admin commands and some useful OpenMQ commands (to query, purge and list) which can be used in scripts to perform tasks as opposed to performing the same procedures via the Glassfish or OpenMQ admin consoles:


Glassfish JMS Commands

To clear previous JMS setup

asadmin delete-jms-resource <JMS_TOPIC_RESOURCE_NAME>
asadmin delete-jms-resource <JMS_TOPIC_CONNECTION_FACTORY_NAME>


To create the JMS Topic Connection factory

asadmin create-jms-resource --restype=javax.jms.TopicConnectionFactory --property transaction-support=LocalTransaction --description="JMS Topic Connection Factory." <JMS_TOPIC_CONNECTION_FACTORY_NAME>


To create the JMS Topic resource

asadmin create-jms-resource --restype=javax.jms.Topic --description="JMS Topic" <JMS_TOPIC_RESOURCE_NAME>


To list JMS resources

asadmin list-jms-resources


OpenMQ Commands

Query the JMS resources

imqcmd query bkr  -b <host>:<port> -passfile <password file> -u <user name>

Purge the Topic (or Queue)

imqcmd purge dst -f -passfile <password file> -n <topic name> -t t -b <host>:<port> -u <user name>

List the message rate and packet flow

imqcmd list dst -passfile <password file> -b <host>:<port> -u <user name>

An example output from the list command would be:

Listing all the destinations on the broker specified by:

-------------------------
Host         Primary Port
-------------------------
localhost    7676

-----------------------------------------------------------------------------------------------------------------
             Name                Type    State      Producers        Consumers                  Msgs             
                                                 Total  Wildcard  Total  Wildcard  Count  Remote  UnAck  Avg Size
-----------------------------------------------------------------------------------------------------------------
JMSTopic                         Topic  RUNNING  0      0         1      0         0      0       0      0.0
mq.sys.dmq                       Queue  RUNNING  0      -         0      -         171    0       0      5273.322

Thursday 3 March 2011

Pessimistic and Optimistic Locking in JPA 2.0

A most welcome addition to JPA 2.0 was the introduction of pessimistic locking. This allows the entity manager (or a query) to lock a database record, thereby preventing other transactions from changing the same record. This ensures data consistency but at a performance cost, so for each entity you need to ask yourself: what is the chance of contention?

Let's look at the two approaches in more detail:

Optimistic Locking

Optimistic locking is the preferred approach when modifying entities that are infrequently updated. Optimistic locking can be explicit as will be shown later or implicit by using a version attribute. A version attribute is an attribute which has been annotated with @Version. This attribute will then get incremented when the transaction commits. The below class shows an example of its usage:

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;
import javax.persistence.PersistenceContext;
import javax.persistence.Version;
 

@MappedSuperclass
public abstract class BaseEntity implements Serializable {

    private static final long serialVersionUID = 1L;
    private static final int ID_LENGTH=36;

    @Id
    @Column(length = ID_LENGTH)
    private String id;
    @Version
    private int versionNumber;

    // etc ...

}
 
There are two optimistic lock modes:

  • OPTIMISTIC or READ - locks the entity when the transaction reads it for entities with a version
  • OPTIMISTIC_FORCE_INCREMENT or WRITE - locks the entity when the transaction reads it for entities with a version and increments the version attribute

Pessimistic Locking

Pessimistic locking can be applied to all entities regardless of whether they have a version attribute or not.
 
There are three pessimistic lock modes:

  • PESSIMISTIC_READ - locks the entity when the transaction reads it and allows reads from other transactions
  • PESSIMISTIC_WRITE - locks the entity when the transaction updates it but does not allow reads, updates or deletes from other transactions
  • PESSIMISTIC_FORCE_INCREMENT - locks the entity when the transaction reads it and increments the version attribute (if present)

There are many different ways to implement a pessimistic or optimistic lock using the Entity Manager and Query interfaces as shown in the example classes below:

import java.io.Serializable;
import java.util.List;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Named;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;

@Named("customerManager")
@ApplicationScoped
public class CustomerManager implements Serializable {

    private static final long serialVersionUID = 1L;

    @PersistenceContext
    private EntityManager entityManager;

    public void setName(String id, String name) {
        Customer customer = entityManager.find(Customer.class, id);
        entityManager.lock(customer, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
        customer.setName(name);
    }

    public Customer findCustomer(String id) {
        return entityManager.find(Customer.class, id, LockModeType.OPTIMISTIC);
    }

    public Customer applyBankCharges(String id, Double charges) {
        Customer customer = entityManager.find(Customer.class, id);
        customer.setBalance(customer.getBalance() - charges);
        if (customer.getBalance() < 0.0) {
            // overwrite changes and return the refreshed and locked object
            entityManager.refresh(customer, LockModeType.PESSIMISTIC_WRITE);
        }
        return customer;
    }

    public List<Customer> findCustomerByPartialName(String partialName) {
        Query query = entityManager.createQuery(
                "SELECT c FROM Customer c WHERE c.name LIKE :partialName")
                .setParameter("partialName", partialName);
        query.setLockMode(LockModeType.PESSIMISTIC_FORCE_INCREMENT);
        return query.getResultList();
    }
}



import javax.persistence.Entity;
import javax.persistence.LockModeType;
import javax.persistence.NamedQuery;


@Entity
@NamedQuery(name = "findByNameQuery",
    query = "SELECT c FROM Customer c WHERE c.name LIKE :name",
    lockMode = LockModeType.PESSIMISTIC_FORCE_INCREMENT)
public class Customer extends BaseEntity {

    private String name;
    private Double balance;

    public void setName(String name) {
        this.name = name;
    }
    public Double getBalance() {
        return this.balance;
    }
    public void setBalance(Double balance) {
        this.balance = balance;
    }
}
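
The named query declared on the Customer entity can then be executed in the usual way, with the PESSIMISTIC_FORCE_INCREMENT lock mode applied to the entities it returns. A minimal sketch of such a method on CustomerManager (the method name findCustomersByName is just for illustration):

    public List<Customer> findCustomersByName(String name) {
        // the lock mode declared on the @NamedQuery is applied to each result
        return entityManager.createNamedQuery("findByNameQuery", Customer.class)
                .setParameter("name", name)
                .getResultList();
    }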

 

Finally, a word on exceptions. If a PessimisticLockException or an OptimisticLockException is thrown then the transaction is rolled back, as they are both RuntimeExceptions.
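
This means the calling code has to decide how to react. One common approach with optimistic locking is simply to retry the unit of work. The below is only a sketch, using an application managed EntityManager and resource local transactions rather than the container managed setup shown above; the method name and retry count are arbitrary, and it needs javax.persistence.EntityManagerFactory and javax.persistence.OptimisticLockException imports in addition to those already shown:

    public void setNameWithRetry(EntityManagerFactory emf, String id, String name) {
        for (int attempt = 0; attempt < 3; attempt++) {
            EntityManager em = emf.createEntityManager();
            try {
                em.getTransaction().begin();
                Customer customer = em.find(Customer.class, id);
                customer.setName(name);
                // flush so the version check happens here; note that some
                // providers may instead surface the failure as a
                // RollbackException when commit() is called
                em.flush();
                em.getTransaction().commit();
                return; // success
            } catch (OptimisticLockException e) {
                // another transaction updated the Customer first; roll back and retry
                if (em.getTransaction().isActive()) {
                    em.getTransaction().rollback();
                }
            } finally {
                em.close();
            }
        }
    }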

Thursday 17 February 2011

Generating Web Services from WSDLs using Maven and deploying to Glassfish

This blog post covers generating Java classes from WSDLs using Maven, and also a problem with web service annotations when deploying to an app server.

There are many different ways to generate Java classes for a given WSDL file (and associated XSDs). One of those ways is to use the JAX-WS wsimport tool. Amongst the classes that the tool can generate are the service endpoint interface and the service class.
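
For reference, the same generation can be done by running the tool directly from the command line, along the lines of the below (the package name, directories and WSDL location are only examples):

wsimport -keep -p com.city81.customer -s src/main/java -d target/classes src/wsdl/Customer.wsdl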

When using Maven, you can use the jaxws-maven-plugin and the wsimport goal. The plugin will read a WSDL file (from the /src/wsdl directory unless otherwise specified via the wsdlDirectory tag) and generate the Java classes. The plugins section from a pom.xml is shown below:


       <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>jaxws-maven-plugin</artifactId>
                <version>1.10</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>wsimport</goal>
                        </goals>
                        <configuration>
                            <wsdlFiles>
                                <wsdlFile>Customer.wsdl</wsdlFile>
                            </wsdlFiles>
                            <staleFile>${project.build.directory}/jaxws/stale/Customer.stale</staleFile>
                        </configuration>
                        <id>wsimport-generate-Customer</id>
                        <phase>generate-sources</phase>
                    </execution>
                </executions>
                <dependencies>
                    <dependency>
                        <groupId>javax.xml</groupId>
                        <artifactId>webservices-api</artifactId>
                        <version>${javax.xml.version}</version>
                    </dependency>
                    <dependency>
                        <groupId>com.sun.xml.bind</groupId>
                        <artifactId>jaxb-xjc</artifactId>
                        <version>${jaxb-xjc.version}</version>
                    </dependency>
                    <dependency>
                        <groupId>com.sun.xml.ws</groupId>
                        <artifactId>jaxws-rt</artifactId>
                        <version>${jaxws-rt.version}</version>
                    </dependency>
                </dependencies>
                <configuration>
                    <sourceDestDir>${project.build.directory}/generated-sources/jaxws-wsimport</sourceDestDir>
                    <xnocompile>true</xnocompile>
                    <verbose>true</verbose>
                    <extension>true</extension>
                    <catalog>${basedir}/src/jax-ws-catalog.xml</catalog>
                </configuration>
            </plugin>
        </plugins>



The generated service endpoint interface will look like the below:

@WebService(
    name = "Customer",
    targetNamespace = "http://<package>/Customer")
@SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE)
public interface Customer {

    /**
     *
     * @param body
     */
    @WebMethod(
        operationName = "AddCustomer",
        action = "http://<package>/Customer/AddCustomer")
    public void addCustomer(
        @WebParam(name = "AddCustomer",
            targetNamespace = "http://<package>/Customer",
            partName = "body") AddCustomer body);
}



A class can then be written to implement the generated interface, as shown below:

@WebService(name = CustomerWebService.NAME,
            targetNamespace = CustomerWebService.TARGET_NAMESPACE,
            portName = CustomerWebService.PORT_NAME,
            serviceName = CustomerWebService.SERVICE_NAME,
            wsdlLocation = CustomerWebService.WSDL_LOCATION_URL)
@SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE)
@BindingType(value = CustomerWebService.SOAP12_OVER_HTTP)
@SchemaValidation()
public class CustomerWebService implements Customer {

    static final String NAME = "Customer";
    static final String TARGET_NAMESPACE 
        = TARGET_NAMESPACE_URL + NAME;
    static final String PORT_NAME = NAME + "SOAP";
    static final String SERVICE_NAME = NAME;
    static final String WSDL_LOCATION_URL 
        = WSDL_LOCATION_PATH + NAME + ".wsdl";
    static final String SOAP12_OVER_HTTP 
        = "http://java.sun.com/xml/ns/jaxws/2003/05/soap/bindings/HTTP/";

    @Override
    @WebMethod(operationName = "AddCustomer")
    public void addCustomer(
        @WebParam(name = "AddCustomer", 
            targetNamespace = "http://<package>/Customer",
            partName = "body") AddCustomer body) {

        // do something
    }

}


Having built an EAR file containing the web service, you can then deploy it. To deploy to, for example, Glassfish v3.0.1, you can use the following command:

asadmin --host <server> --port <port> --interactive=false --echo=true --terse=true deploy
--name customer-ear --force=false --precompilejsp=false --verify=false --enabled=true --generatermistubs=false --availabilityenabled=false
--keepreposdir=false --keepfailedstubs=false --logReportedErrors=true --help=false ./customer-ear.ear


One point to note is that the below implementation of the addCustomer method would compile...

    @Override
    public void addCustomer(AddCustomer body) {

        // do something
    }

... but it would not inherit the web service annotations from the interface and would result in the below error when deploying the EAR:

com.sun.enterprise.admin.cli.CommandException: remote failure: Exception while loading the app : java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: javax.servlet.ServletException
Exception while invoking class com.sun.enterprise.web.WebApplication start method : java.lang.Exception: java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: org.apache.catalina.LifecycleException: javax.servlet.ServletException

The web service annotations (@WebMethod, @WebParam etc.) need to be explicitly replicated in the concrete class.