Retrieve deployed application information using JMX

Every now and then you get to do things you’ve never done before, which pushes you a little out of your comfort zone. In the past I wasn’t eager to leave my comfort zone, but nowadays it makes me enjoy my job more and presents some nice challenges.
At my client, I got a request that seemed quite easy to solve, but I didn’t have much experience with the solution.

They have a portal that links to all applications a specific user has access to. The general application and access information is fetched from a database.
Now the user wants to be able to see the deployed version of each application. Most applications are deployed on an Oracle WebLogic application server.

You can interrogate the state of the application server through JMX, Java Management Extensions. Through standards-based interfaces called MBeans that the application server provides, you can monitor the application server, the configuration of a domain, and the available services and applications. By calling some of the MBean’s methods, you can also change its configuration or deploy applications.

The WebLogic Server MBean Data Model is a hierarchical model. It is structured according to the XML document structure that is used for the server’s configuration. You can navigate this MBean hierarchy by getting attributes of a specific MBean. Each MBean is defined by its ObjectName. Without going into too much detail, the ObjectName is the name with which an MBean is registered in the MBean server. You can find more information about ObjectNames on this link: WebLogic Server MBean Object Names.
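To get a feel for the ObjectName syntax, here is a small stand-alone example using the JDK’s javax.management API. The domain and key properties below are illustrative, not taken from a real WebLogic registration:

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class ObjectNameDemo {
    public static void main(String[] args)
            throws MalformedObjectNameException {
        // An ObjectName consists of a domain followed by a colon
        // and a comma-separated list of key properties.
        ObjectName name = new ObjectName(
                "com.bea:Name=myserver,Type=ServerRuntime");

        System.out.println(name.getDomain());            // com.bea
        System.out.println(name.getKeyProperty("Type")); // ServerRuntime
    }
}
```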

I used specific MBeans to retrieve detailed information about all applications deployed on the WebLogic Server. I’ll show you the code.

...
private static final String RUNTIME_MBEAN_SERVER_JNDI_NAME 
       = "java:comp/env/jmx/runtime";
...
private static MBeanServer getMBeanServer() {
    MBeanServer mBeanServer = null;

    try {
        InitialContext initialContext = new InitialContext();
        mBeanServer = 
               (MBeanServer) initialContext.lookup(RUNTIME_MBEAN_SERVER_JNDI_NAME);
    } catch (NamingException e) {
        LOGGER.error("Error connecting to the MBean server", e);
    }

    return mBeanServer;
}

The small piece of code above fetches the MBean server using the JNDI name of WebLogic’s Runtime MBean Server. It’s pretty straightforward.

public static Map<String, String> getDeployedApplications() {
    Map<String, String> deployedApplications = 
           new HashMap<String, String>();

    try {
        MBeanServer mBeanServer = getMBeanServer();
        ObjectName domainConfiguration =
               (ObjectName) mBeanServer.getAttribute(
                      new ObjectName(RuntimeServiceMBean.OBJECT_NAME), 
                      "DomainConfiguration");
        ObjectName[] appDeployments = 
               (ObjectName[]) mBeanServer.getAttribute(
                      domainConfiguration, 
                      "AppDeployments");
        for (ObjectName appDeployment : appDeployments) {
            try {
                Object applicationName = 
                       mBeanServer.getAttribute(
                              appDeployment, 
                              "ApplicationName");
                Object versionIdentifier = 
                       mBeanServer.getAttribute(
                              appDeployment, 
                              "VersionIdentifier");
                if (versionIdentifier != null) {
                    deployedApplications.put(
                           applicationName.toString(), 
                           versionIdentifier.toString());
                }
            } catch (Exception e) {
                LOGGER.error(String.format("Error fetching deployment information for '%s'", 
                       appDeployment), e);
            }
        }
    } catch (Exception e) {
        LOGGER.error("Error fetching deployed applications", e);
    }

    return Collections.unmodifiableMap(deployedApplications);
}

Here I navigate the MBean hierarchy depicted in The WebLogic Server® MBean Reference down to the AppDeploymentMBean. I quote:

This MBean is used to configure all physical package types which can be deployed on a WebLogic domain, for instance, EAR files and standalone Java EE and non-Java EE modules.

On the AppDeploymentMBean reference you see that one of the access points is through the DomainMBean.AppDeployments attribute. Always choose the shortest and easiest path. When you click on the link, you go to the DomainMBean reference. On that page, you see that one of the access points is the RuntimeServiceMBean.DomainConfiguration attribute. The RuntimeServiceMBean can be found directly under the MBeanServer Services. From the AppDeploymentMBean you can then retrieve the ApplicationName and VersionIdentifier attributes of the deployed application.

I hope I could help you with your problem or that you gained some interesting new information. If you have questions or suggestions, please comment below this post.

You can find more information about WebLogic (12.1.2) MBeans on the following links:

Posted in Java, WebLogic | 2 Comments

Console2 installation and configuration

At my client, I got to know Console (also called Console2) – A Windows console window enhancement.

In this post I’ll describe what the tool is about, how you can install and configure it and how to add it to your “Open command window here” context menu.
This information handles a Windows 7 environment.

Taken from the Console website itself:

Console is a Windows console window enhancement. Console features include: multiple tabs, text editor-like text selection, different background types, alpha and color-key transparency, configurable font, different window styles

What I like the most about Console is the possibility to resize your console window on-the-fly and the multiple tabs. You can download the latest released or development version of Console on this link.

I have configured Console this way:

  • Right-click in the Console window and choose “Edit -> Settings…” or press CTRL-S.
  • In the Console tab, change the “Buffer size”. Set “Rows” to at least 500 and “Columns” to at least 200.
  • In the Appearance tab, you can choose whatever font and font-size you like. Also, you can choose a font color. In my case, I chose a bright green color.
  • Under Appearance -> More… hide the menu, toolbar and status bar. You can leave the rest by default.
  • In the Behavior tab, check “Don’t wrap long lines”.
  • Under Hotkeys, we’ll change some default key assignments:
    • Change the “New Tab 1” hotkey to CTRL-T. Click on the hotkey, enter the key combination you want in the textbox and click the “Assign” button to save the new hotkey assignment.
    • Set “Copy selection” to CTRL-X (Keep CTRL-C for aborting a running batch or application).
    • The same way, set “Paste” to CTRL-V.
  • In the Hotkeys -> Mouse tab, change the following assignments:
    • Set “Copy/Clear selection” to “Left”.
    • Set “Select text” to “Left + Shift”.
    • Set “Paste text” to “Middle”.
  • Lastly, make sure “Save settings to user directory” is checked and click the “OK” button.

For a quick setup, you can download my user settings right here. Right-click the link, choose “Save as…” and save it to your Console user directory.
(C:\Users\<username>\AppData\Roaming\Console): console.xml

Now, you have a shiny new console, but when you “SHIFT-Right Click” a folder to choose “Open command window here”, the old console window still appears. This can be changed using the registry.
Call “regedit” from the Start menu (you’ll need Administrator privileges for this). Find the registry key “HKEY_CLASSES_ROOT\Directory\shell\cmd\command” and modify the “(Default)” value to “<Console-installation-folder>\Console.exe -d %1” (excluding the quotes), for example “C:\dvl\tools\Console2\Console.exe -d %1”.
When you close “regedit” and try “Open command window here” again, you’ll see your shiny new console appearing.
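If you prefer not to edit the value by hand, the same change can be made by importing a .reg file like the one below. The installation path is just the example used above; adjust it to your own Console folder.

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell\cmd\command]
@="C:\\dvl\\tools\\Console2\\Console.exe -d %1"
```

Save it as, say, console2-context-menu.reg and double-click it to import.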

Console2

Beware, installing Console and making all these adjustments will NOT change your CMD command. If you call CMD from within the Start menu, the old console will still be called!

I hope you’ll enjoy using your new console and that it may improve your developer productivity.

Posted in Tools | 1 Comment

A new year: new changes, new challenges

The past few months have been filled with changes:
I decided to take a different path in my career. I changed employer to further improve my professional life. I changed customers to learn new stuff and do the things I love to do.
I also decided to write more frequently on my blog and want to focus more on application development.

This new year brings a whole new challenge.
I started working at a new customer in January. I’ll support my new team and help them maintain and extend existing applications.
I just started working there, so of course I don’t have the full picture yet, but the applications concern treasury services and were built using, among other technologies, the Flex SDK. I don’t have previous experience with Flex, so it will be challenging and rewarding to get a grasp of this technology and its features.

This is why I started working in IT: to continuously challenge myself, create new things, improve what has already been built, see applications come to life and learn new things along the way.

This year will bring a new learning experience for me. I hope to keep you informed about the things I learn and the changes in the Java ecosystem, to continue sharing my knowledge with you, and to get your feedback in return.

Happy readings and may you have a prosperous 2014 ahead!

Posted in Uncategorized | Leave a comment

2013 in review

The WordPress.com stats helper monkeys prepared a 2013 annual report for my blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 15,000 times in 2013. If it were a concert at Sydney Opera House, it would take about 6 sold-out performances for that many people to see it.

Click here to see the complete report.

I thank you very much for following my blog and hope you’ll further enjoy reading it in 2014!

Posted in Uncategorized | Leave a comment

Devoxx 2013: Java EE 7

At Devoxx 2013, the tracks that I was most interested in were the Java SE and Java EE tracks, specifically in presentations about the new stuff in Java SE 8 and Java EE 7.
In this post, I’ll talk about the new features of Java EE 7.

As I already mentioned in my previous blog post, I attended 2 sessions on Java EE 7 this year. One by Java EE evangelists Arun Gupta and Antonio Goncalves during the University days and one by David Delabassee during the Conference. All of them gave a good overview on the new and updated specifications.

I’ll give you a broad overview on what Java EE 7 is all about and what it can mean for your day-to-day work.

History

First, I’ll explain the history of Java EE to describe where the platform is coming from:

JavaEE_Future

It all started with Java EE 1.2 and 1.3, where the basic specifications were implemented: Servlet, JSP, JMS and EJB (with CMP in Java EE 1.3). Java EE 1.4 focused on web services development. Java EE 1.4 and older weren’t the most popular versions with Java developers. They were difficult and bloated, and a lot of boilerplate code and XML configuration was necessary to implement even the tiniest feature.
Java EE 5 made a complete 180° turn. The focus was put on ease of development: the platform had to be easier to use and more fun to develop with. They succeeded!
Annotations were added to enable light-weight configuration of the application, making much of the XML configuration redundant. EJB 3.0 was created, a much improved version of EJB 2.x, and that specification was also used as a basis for new web services development. To simplify persistence, JPA emerged, a standardized version of popular ORM frameworks.
Java EE 6 made the platform even more configurable by introducing CDI, a standardized Contexts and Dependency Injection API. It fully implemented RESTful web services and introduced the Java EE 6 Web Profile to enable light-weight, fully managed web application development.

Because an image says more than a thousand words, Java EE 7 is all about these 3 big themes:

javaee7-theme

I’ll discuss these themes when I talk about the different specifications.

Specifications

Java EE 7 includes many updated and some new specifications:

javaee7-pancake

CDI 1.1

CDI is now integrated into most Java EE specifications like JPA, EJB, Bean Validation, EventListeners etc. In the past, some specifications like EJB 3.x had their own dependency injection mechanism. In EJB 3.x you would use the @EJB annotation to inject EJBs. Now you can also use CDI for that.

CDI is enabled by default. A beans.xml file is no longer necessary, unless you want to override the new bean-discovery-mode option or other convention-over-configuration defaults. The bean-discovery-mode option describes which beans will be discovered and injected automatically. You can specify the following values:

  • all: All types in the archive will be considered for discovery and injection.
  • annotated: Only types with bean defining annotations in the archive will be considered.
  • none: No beans in the archive will be considered.

You can use a new annotation: @Vetoed. Beans annotated with @Vetoed will not be discovered nor injected, regardless of the bean-discovery-mode configuration. It can be enabled on a per class basis, or for an entire package by annotating a package-info.java file.
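If you do want to override the default, a minimal CDI 1.1 beans.xml could look like this sketch (shown here with the annotated discovery mode as an example):

```xml
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       version="1.1" bean-discovery-mode="annotated">
    <!-- only types with bean defining annotations are discovered -->
</beans>
```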

Bean Validation 1.1

Bean Validation is integrated into more Java EE specifications like JAX-RS and JAXB. It’s not yet integrated into the SOAP APIs. You can also use Bean Validation in Java SE if you include the necessary Bean Validation provider on the classpath.

Bean Validation supports standardized method-level validation. You annotate pre- and post conditions on constructors, methods and method parameters. This promotes “programming by contract”.

@AssertTrue
public boolean validate(@NotNull CreditCard creditCard) {
    // Details omitted
    return true;
}

Talking about a cohesive integrated platform, CDI can now be used in Validator classes to inject dependencies.

Interceptors 1.2

Interceptors are also integrated into most Java EE specifications, except the SOAP APIs.

New is the fact that you can associate interceptors with the instantiation of a class. You probably already know that the @AroundInvoke annotation exists. Now you can annotate methods with @AroundConstruct to perform certain functionality before and after a new instance of a class is created.

@AroundConstruct
public void validateConstructor(InvocationContext context) {
    System.out.println(
        "MyAroundConstructInterceptor.validateConstructor");
}

Interceptor ordering is supported. You can define interceptor priority using the @Priority annotation. Interceptors run in ascending priority value; use the constants PLATFORM_BEFORE, LIBRARY_BEFORE, APPLICATION, LIBRARY_AFTER and PLATFORM_AFTER, adding your own integer offset to them, to order your interceptors.
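For illustration, a hypothetical logging interceptor ordered within the application range could look like the sketch below. The @Loggable binding and the class name are made up for this example; only the annotations come from the specification:

```java
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.annotation.Priority;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InterceptorBinding;
import javax.interceptor.InvocationContext;

// Hypothetical interceptor binding; the name is made up.
@InterceptorBinding
@Retention(RUNTIME)
@Target({TYPE, METHOD})
@interface Loggable { }

// Runs within the application priority range; a higher offset runs later.
@Interceptor
@Loggable
@Priority(Interceptor.Priority.APPLICATION + 10)
public class LoggingInterceptor {
    @AroundInvoke
    public Object log(InvocationContext ctx) throws Exception {
        System.out.println("Entering " + ctx.getMethod().getName());
        try {
            return ctx.proceed();
        } finally {
            System.out.println("Leaving " + ctx.getMethod().getName());
        }
    }
}
```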

Concurrency Utilities 1.0

In the past, creating your own threads in a managed Java EE application was prohibited. The container took care of concurrency; you shouldn’t intervene in that, or you could open Pandora’s box of concurrency problems.
Java EE 7 now supports creating container-managed threads yourself. You can use a ManagedExecutorService, ManagedScheduledExecutorService or ManagedThreadFactory for that. The executor or factory takes a Runnable or Callable instance as a parameter.

@Resource(name = "DefaultManagedExecutorService")
ManagedExecutorService executor;

public boolean doSomething() throws InterruptedException {
    TestStatus.latch = new CountDownLatch(1);
    executor.submit(new Runnable() {
        @Override
        public void run() {
            // Details omitted
        }
    });
    TestStatus.latch.await(2000, TimeUnit.MILLISECONDS);
    return true;
}

JPA 2.1

Schema generation has been standardized in Java EE 7. You can set the javax.persistence.schema-generation.database.action option in persistence.xml. Valid values are none, create, drop-and-create and drop. A validate or update setting doesn’t seem to exist (yet). Other options like loading data using SQL scripts are also standardized.

You can define additional indexes for schema generation in your code using the @Index annotation.

@Entity
@Table(indexes = {
        @Index(columnList = "ISBN"),
        @Index(columnList = "NBOFPAGE")
})
public class Book {
    // Details omitted
}

An unsynchronized persistence context is now available. When you use this type of persistence context, your changes are not flushed to the database until joinTransaction() is invoked. This way, you have more control over when the flush to the database occurs.

@PersistenceContext(
    synchronization = SynchronizationType.UNSYNCHRONIZED)
EntityManager em;

public void persistWithoutJoin(Employee e) {
    em.persist(e);
}

public void persistWithJoin(Employee e) {
    em.joinTransaction();
    em.persist(e);
}

Stored procedures can be specified in a standardized way using the @NamedStoredProcedureQuery annotation.

@NamedStoredProcedureQuery(name="PersonStoredProcedure",
    procedureName="PERSON_SP")

As far as I remember, no code was shown in the presentation on how to call the stored procedure. In his examples on GitHub, Arun mentions “TBD: how to invoke StoredProcedure”. But on http://www.mastertheboss.com/quickstart-tutorials-hibernate/jpa/jpa-21-tutorial I found some code on how to call a named stored procedure that takes parameters.

// entityManager is an injected EntityManager instance
StoredProcedureQuery spq = entityManager
    .createNamedStoredProcedureQuery("PersonStoredProcedure");
spq.registerStoredProcedureParameter(1, String.class,
    ParameterMode.INOUT);
spq.setParameter(1, "FRANK");
spq.registerStoredProcedureParameter(2, Integer.class,
    ParameterMode.IN);
spq.setParameter(2, 100);

spq.execute();
String response = (String) spq.getOutputParameterValue(1);

JTA 1.2

Transactional services that were used in EJB’s are now extracted to the JTA specification.
You can define transaction management on managed beans as a CDI interceptor binding using the @Transactional annotation, which seems to come straight from Spring. You can specify on which exceptions the transaction should be rolled back, and which exceptions shouldn’t cause a rollback.

@Transactional(value = Transactional.TxType.REQUIRED,
    rollbackOn = {SQLException.class, JMSException.class},
    dontRollbackOn = SQLWarning.class)
public class BookRestService {
    // Details ommitted
}

You can also define a managed bean as transaction scoped. The bean will only live during the scope of one transaction. Just annotate the bean with @TransactionScoped.
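A minimal sketch of such a bean (the class name and property are made up; transaction-scoped beans use a passivating scope, so they should be serializable):

```java
import java.io.Serializable;
import javax.transaction.TransactionScoped;

// This bean instance exists only for the duration of the active JTA
// transaction; a new transaction gets a fresh instance.
@TransactionScoped
public class OrderContext implements Serializable {
    private String orderId;

    public String getOrderId() { return orderId; }
    public void setOrderId(String orderId) { this.orderId = orderId; }
}
```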

EJB 3.2

As discussed above, a lot of EJB specific services like EJB dependency injection and transaction management, were extracted to other existing specifications.

EJB 3.2 contains numerous updates and improvements.
Lifecycle callback methods can now opt in to being transactional, whereas before, transaction management was mostly ignored on lifecycle callbacks.

@Stateful
public class HelloBean {
    @PersistenceContext(type=PersistenceContextType.EXTENDED)
    private EntityManager em;

    @TransactionAttribute(
                TransactionAttributeType.REQUIRES_NEW)
    @PostConstruct
    public void init() {
        myEntity = em.find(...);
    }

    @TransactionAttribute(
                TransactionAttributeType.REQUIRES_NEW)
    @PreDestroy
    public void destroy() {
        em.flush();
    }
}

You can disable the passivation of stateful beans using the annotation @Stateful(passivationCapable = false). In some cases this can increase performance, scalability and robustness.

EJB 3.x Lite is a light-weight version of the full EJB 3.x API.
It supports EJB development but doesn’t contain MDBs, remoting or JAX-WS / JAX-RPC service endpoints. EJB 3.1 Lite didn’t contain asynchronous session beans or timer services.
Now in EJB 3.2 Lite, local asynchronous invocations and a non-persistent EJB Timer Service are included. You can define an asynchronous method by annotating it with @Asynchronous. Timer Service methods can be scheduled using the @Schedule annotation, which takes a cron-like expression to define the scheduling.

@Schedule(hour = "*", minute = "*", second = "*/5",
    info = "Every 5 second timer")
public void printDate() {
    // Details omitted
}

JMS 2.0

Java EE 7 brings you a complete JMS API overhaul. The programming model is extremely simplified.

The JMSContext API now supports a Builder Pattern style of programming.
Several JMS interfaces implement AutoCloseable, so they can be used in try-with-resources blocks. And you can now define JMS connection factories and destinations like queues and topics with annotations. You can find a comparison between the JMS 1.1 and JMS 2.0 APIs below.

public void sendMessageJMS11(
        ConnectionFactory connectionFactory,
        Queue queue, String text) {
    try {
        Connection connection =
            connectionFactory.createConnection();
        try {
            Session session = connection.createSession(
                false,Session.AUTO_ACKNOWLEDGE);
            MessageProducer messageProducer =
                session.createProducer(queue);
            TextMessage textMessage =
                session.createTextMessage(text);
            messageProducer.send(textMessage);
        } finally {
            connection.close();
        }
    } catch (JMSException ex) {
        // handle exception (details omitted)
    }
}


@Inject
JMSContext context;

@Resource(mappedName =
    Resources.SYNC_CONTAINER_MANAGED_QUEUE)
Queue queue;

public void sendMessageJMS20(String text) {
    try {
        // the injected, container-managed JMSContext must not be
        // closed manually; the container handles its lifecycle
        context.createProducer().send(queue, text);
    } catch (JMSRuntimeException ex) {
        // handle exception (details omitted)
    }
}


@JMSDestinationDefinition(
    name = Resources.SYNC_CONTAINER_MANAGED_QUEUE,
    resourceAdapter = "jmsra",
    interfaceName = "javax.jms.Queue",
    destinationName = "syncContainerManagedQueue",
    description = "My Sync Queue")

Servlet 3.1

Servlet 3.1 supports non-blocking I/O. You can use this when processing large data sets in servlets. Several methods were added to existing interfaces and new interfaces were introduced: ReadListener and WriteListener. You can only use this in asynchronous servlets.

AsyncContext context = request.startAsync();
ServletInputStream input = request.getInputStream();
input.setReadListener(new MyReadListener(input, context));

You can build richer protocols on top of HTTP by using the Servlet 3.1 protocol upgrade feature. This protocol upgrade is what HTML5 WebSockets use under the hood. A new HttpUpgradeHandler interface was created for this purpose.

By using the <deny-uncovered-http-methods /> tag in the application’s web.xml you can improve the security of your applications. It does exactly what it says: it denies requests for any HTTP method that isn’t covered by a security constraint in web.xml.
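As a sketch, the element goes directly under web-app alongside your security constraints. The constraint below is a made-up example; only GET on /admin/* is covered, so all other methods on that pattern are denied:

```xml
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         version="3.1">
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>admin</web-resource-name>
            <url-pattern>/admin/*</url-pattern>
            <http-method>GET</http-method>
        </web-resource-collection>
        <auth-constraint>
            <role-name>admin</role-name>
        </auth-constraint>
    </security-constraint>
    <!-- any HTTP method not listed above (POST, PUT, ...) is denied -->
    <deny-uncovered-http-methods/>
</web-app>
```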

WebSocket 1.0

WebSockets are one of the cornerstones of HTML5. They enable full-duplex, bi-directional communication over a single TCP connection. A good example of WebSocket usage is a chat application where several clients can post messages to and read messages from each other.

The client and server endpoints can be annotated.

@ServerEndpoint("/chat")
public class ChatEndpoint {
    @OnMessage
    public void message(String message, Session client)
            throws IOException, EncodeException {
        for (Session peer : client.getOpenSessions()) {
            peer.getBasicRemote().sendText(message);
        }
    }
}

You can declare them programmatically as well by extending the Endpoint class. Through the EndpointConfig subtypes ClientEndpointConfig and ServerEndpointConfig, you can modify your endpoint configuration.
You also have lifecycle callbacks like @OnOpen, @OnClose and @OnError at your disposal.

You have the ability to create your own encoders and decoders of messages. I can best explain this with an example.

@ServerEndpoint(value = "/encoder",
    encoders = {MyMessageEncoder.class},
    decoders = {MyMessageDecoder.class})
public class MyEndpoint {
    @OnMessage
    public MyMessage messageReceived(MyMessage message) {
        System.out.println("messageReceived: " + message);
        return message;
    }
}


public class MyMessageEncoder
        implements Encoder.Text<MyMessage> {
    @Override
    public String encode(MyMessage myMessage)
            throws EncodeException {
        return myMessage.getJsonObject().toString();
    }

    @Override
    public void init(EndpointConfig ec) { }

    @Override
    public void destroy() { }
}


public class MyMessageDecoder
    implements Decoder.Text<MyMessage> {
    @Override
    public MyMessage decode(String string)
            throws DecodeException {
        MyMessage myMessage = new MyMessage(
            Json.createReader(new StringReader(string)).
                                        readObject());
        return myMessage;
    }

    @Override
    public boolean willDecode(String string) {
        return true;
    }

    @Override
    public void init(EndpointConfig ec) { }

    @Override
    public void destroy() { }
}

EL 3.0

Historically, Expression Language was bundled in the JSP specification. It has now been extracted into a separate, updated specification. This way, you can use EL in a stand-alone environment to:

  • Evaluate EL expressions
  • Get/set bean properties
  • Define a static method as an EL function
  • Define an object instance as an EL name

ELProcessor elp = new ELProcessor();
elp.defineBean("employee", new Employee("Charlie Brown"));
String name = (String) elp.eval("employee.name");

JSF 2.2

In JSF you now have a standardized version of Spring Web Flow: Faces Flow!
You can define reusable flows in XML and package them in a JAR. You can also build flows using annotations and a FlowBuilder. As you can see in the example below, you can use flows and @FlowScoped beans in EL. @FlowScoped beans only live during – as the annotation implies – the flow they were created in.

@Produces @FlowDefinition
public Flow defineFlow(
        @FlowBuilderParameter FlowBuilder flowBuilder) {
    String flowId = "flow1";
    flowBuilder.id("", flowId);
    flowBuilder.viewNode(flowId, "/" + flowId + "/"
        + flowId + ".xhtml").markAsStartNode();
    flowBuilder.returnNode("taskFlowReturn1").
        fromOutcome("#{flow1Bean.returnValue}");
    flowBuilder.returnNode("goHome").
        fromOutcome("#{flow1Bean.homeValue}");
    flowBuilder.inboundParameter("param1FromFlow2",
        "#{flowScope.param1Value}");
    flowBuilder.inboundParameter("param2FromFlow2",
        "#{flowScope.param2Value}");
    flowBuilder.flowCallNode("call2").
        flowReference("", "flow2").
        outboundParameter("param1FromFlow1",
            "param1 flow1 value").
        outboundParameter("param2FromFlow1",
            "param2 flow1 value");

    return flowBuilder.getFlow();
}

You can create reusable skins and themes using Resource Library Contracts.
JSF 2.2 now has better support for HTML pass-through attributes for HTML5-friendly markup and contains a File Upload component.

JAX-RS 2.0

Like the JMS 2.0 API, the JAX-RS 2.0 API brings you a more simplified programming model, using the Builder Pattern to invoke REST services.
You can also have asynchronous clients and servers. RESTful clients work with Futures, while servers work with suspended async responses.

Future<String> future = ClientBuilder.newClient()
    .target("http://www.foo.com/book")
    .request()
    .async()
    .get(String.class);
try {
    String body = future.get(1, TimeUnit.MINUTES);
} catch (InterruptedException | ExecutionException
        | TimeoutException e) {
    // Details omitted
}


@Path("/async")
public class AsyncResource {
    @GET
    public void asyncGet(@Suspended final AsyncResponse asyncResp) {
        new Thread(new Runnable() {

            public void run() {
                String result = veryExpensiveOperation();
                asyncResp.resume(result);
            }

        }).start();
    }
}

You can process request and response headers by using message filters. New interfaces were created for this purpose: ClientRequestFilter, ClientResponseFilter, ContainerRequestFilter and ContainerResponseFilter. In a ClientRequestFilter implementation for example, you process the ClientRequestContext parameter to the filter() method.
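As a sketch of the client side, here is a hypothetical filter that adds a header to every outgoing request before it is sent. The class name and header value are made up for this example:

```java
import java.io.IOException;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;

// Register on a Client via client.register(new CorrelationIdFilter()).
public class CorrelationIdFilter implements ClientRequestFilter {
    @Override
    public void filter(ClientRequestContext requestContext)
            throws IOException {
        // runs before the request goes over the wire
        requestContext.getHeaders().add("X-Correlation-Id", "12345");
    }
}
```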

Entity interceptors (not for JPA entities but JAX-RS messages!) can be used to marshal and unmarshal HTTP message bodies. For that, you have to implement the ReaderInterceptor.aroundReadFrom() and WriterInterceptor.aroundWriteTo() interface methods that take a ReaderInterceptorContext or WriterInterceptorContext respectively.

JSON-P 1.0

JSON-P is the JSON counterpart of JAXP that processes XML. You can use the new Streaming API for creating and parsing JSON. The Streaming API supports a Builder Pattern and is similar to the StAX API for XML processing.

With the JSON ObjectBuilder you create a JsonObject model in memory by adding elements.

JsonObject jsonObject = Json.createObjectBuilder()
        .add("title", "The Matrix")
        .add("year", 1999)
        .add("cast", Json.createArrayBuilder()
                .add("Keanu Reeves")
                .add("Laurence Fishburne")
                .add("Carrie-Anne Moss"))
        .build();

The JsonParser is an event-based parser that reads JSON data from a stream.

public void testSimpleObject() {
    JsonParser parser = Json.createParser(Thread
        .currentThread()
        .getContextClassLoader()
        .getResourceAsStream("2.json"));

    assertEquals(JsonParser.Event.START_OBJECT, parser.next());
    assertEquals(JsonParser.Event.KEY_NAME, parser.next());
    assertEquals(JsonParser.Event.VALUE_STRING, parser.next());
    assertEquals(JsonParser.Event.KEY_NAME, parser.next());
    assertEquals(JsonParser.Event.VALUE_STRING, parser.next());
    assertEquals(JsonParser.Event.END_OBJECT, parser.next());
}

Batch Applications 1.0

In batch applications, two styles of processing exist: chunk-style processing and batchlet-style processing.
Chunk-style processing is item-oriented. You process a batch of items at a time. A batch job contains one or more steps. A step contains chunks where a certain amount of items are processed. Every chunk is processed in a separate transaction. An ItemReader reads an item for the chunk that needs to be processed. An ItemProcessor processes the item that was read. At the end of the batch or the chunk, an ItemWriter writes the results to where you want them to go: a file, an e-mail, the database, … Abstract implementations already exist for these interfaces.
Batchlet-style processing is task-oriented. It doesn’t process items but does one specific task, like sending an e-mail. It is also part of a step. Your batchlet implementation class has to implement the Batchlet interface.
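A minimal batchlet sketch under those assumptions (the class name and task are made up; AbstractBatchlet provides a no-op stop() so you only implement process()):

```java
import javax.batch.api.AbstractBatchlet;
import javax.inject.Named;

// Referenced from the job XML as <batchlet ref="emailBatchlet"/>.
@Named
public class EmailBatchlet extends AbstractBatchlet {
    @Override
    public String process() throws Exception {
        // send the e-mail here (details omitted)
        return "COMPLETED"; // becomes the step's exit status
    }
}
```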

You can write listeners around specific phases of jobs, steps or chunks. You have to implement specific interfaces, or extend their respective abstract classes. The relevant interfaces are:

  • JobListener
  • StepListener
  • ChunkListener
  • ItemRead/Write/ProcessListener
  • SkipRead/Write/ProcessListener
  • RetryRead/Write/ProcessListener

The job definition is specified in an XML file. You can create partitions declaratively in XML or programmatically in Java to run job parts in parallel.
In the XML you can define the complete workflow. A flow consists of elements that execute together as a unit. You can define splits that support concurrent execution of several flows. In the XML you can also create decision paths to allow conditional continuation of the batch. For the XML decision, you have to write an implementation of the Decider interface that makes the decision and returns a String that refers to the path that needs to be followed.
An example of an XML job definition is this:

<job id="myJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
        version="1.0">
    <step id="step1" next="decider1">
        <batchlet ref="myBatchlet1"/>
    </step>
    <decision id="decider1" ref="myDecider">
        <next on="foobar" to="step3"/>
        <stop on="foobar2" exit-status="foobar3"
            restart="step3"/>
    </decision>
    <step id="step2">
        <batchlet ref="myBatchlet2"/>
    </step>
    <step id="step3">
        <batchlet ref="myBatchlet3"/>
    </step>
</job>
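The myDecider referenced in the XML above could then be implemented like this; a sketch assuming the standard javax.batch API, with the return values matching the on attributes of the decision element:

```java
import javax.batch.api.Decider;
import javax.batch.runtime.StepExecution;
import javax.inject.Named;

@Named("myDecider")
public class MyDecider implements Decider {

    @Override
    public String decide(StepExecution[] executions) throws Exception {
        // Inspect the exit status of the step(s) that just finished and
        // return the value that the <decision> element matches on.
        for (StepExecution execution : executions) {
            if ("FAILED".equals(execution.getExitStatus())) {
                return "foobar2"; // matches <stop on="foobar2" .../>
            }
        }
        return "foobar"; // matches <next on="foobar" to="step3"/>
    }
}
```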

Here is a small example of how to launch a batch job and print out some information about it:

// 'out' is a PrintWriter, e.g. obtained from a servlet response
out.println("About to start the job");
JobOperator jo = BatchRuntime.getJobOperator();
out.println("Got the job operator: " + jo);
long jid = jo.start("myJob", new Properties());
out.println("Job submitted: " + jid);
out.println(jo.getJobInstanceCount("myJob")
    + " job instance(s) found");
JobExecution je = jo.getJobExecution(jid);
out.println("Job created on: " + je.getCreateTime());
out.println("Job started on: " + je.getStartTime());
out.println("Found: " + jo.getJobNames().size() + " jobs");
for (String j : jo.getJobNames()) {
    out.println("--> " + j);
}

A point of interest is that batch processing also works outside of the container.

JavaMail 1.5

A MailSession can now be defined using annotations.

@MailSessionDefinition(name = "java:comp/myMailSession",
    host = "smtp.gmail.com",
    transportProtocol = "smtps",
    properties = {
        "mail.debug=true"
    })
public class AnnotatedEmailServlet extends HttpServlet {
    @Resource(lookup = "java:comp/myMailSession")
    Session session;

    // Details omitted
}

JCA 1.7

In the Java Connector Architecture, you can also define objects using the annotations @ConnectionDefinition and @AdministeredObjectDefinition.

@AdministeredObjectDefinition(
    className = "MyQueueImpl",
    name = "java:comp/MyQueue",
    resourceAdapter = "myAdapter")

General

Over the entire platform, Java EE 7 now supports default resources, such as a default datasource, a default JMS factory etc. If you don’t specify a resource name for injection, the default resource will be injected.

@Resource(lookup="java:comp/DefaultDataSource")
DataSource myDS;
// is the same as
@Resource
DataSource myDS;

All Sun-related XML namespaces have been changed to xmlns.jcp.org, and older specifications like EJB Entity Beans (CMP, BMP, EJB QL), JAX-RPC and JAXR have been pruned from the new specification.

The Future

Of course, the Java world doesn’t end at Java EE 7. The expert groups have already been looking forward to Java EE 8 for quite some time. In Java EE 8, the focus will be on leveraging the platform in the cloud and on standardizing PaaS to make cloud applications portable across different cloud providers. The specification leads will also continue to improve the platform and update it for HTML5, caching, NoSQL, polyglot programming and other trending topics.

Summary

As you have seen, a lot of new features were added in Java EE 7 that make the platform more productive, easier to develop with and more future-proof.
I hope you now have an overview of the new features of Java EE 7, so you can start looking around for more information and try out the examples and the platform yourself.

I hope you enjoyed reading this blog post and that you have learned something new from it. Please be so kind as to give your constructive feedback and comments below.

Before closing off, I would like to refer to the following sources where I took the liberty of getting the images and examples from:

Posted in Java | 1 Comment

Devoxx 2013: A Retrospective

This year I had the opportunity to attend Devoxx again for the entire week. I usually try to get my hands on a Combi ticket for both the University and the Conference. As always, it was an enlightening and invigorating experience.


For me Devoxx 2013 was all about Java SE 8 in general, Lambdas in particular and the new Java EE 7 stuff.
Java SE, Java EE and Cloud & BigData were the tracks I was most interested in.
It was an eye opener for me to see that the JVM Languages and Web & HTML5 tracks were getting so much attention. I’m not into those new JVM Languages and I avoid getting too much involved in – what I call – the JavaScript Web development mess. But perhaps I need to broaden my view and get to know those technologies better to be able to appreciate them.

I’ll try to give you my perspective on this year’s Devoxx: what I personally liked, what I didn’t like so much and what actions I could take to continue to improve my knowledge and be a Well-Grounded Polyglot Java Developer.
The latter is where I hope to receive some feedback from you.

Cloud & BigData

In the presentation “Data Access Patterns in this day and age (of cloud, bigdata, nosql & other buzzwords)” by Alex Snaps, the concepts of and differences between RDBMS with ORM and BigData with NoSQL were explained.
Basically RDBMS is all about ACID (Atomic, Consistent, Isolated & Durable), whereas BigData is all about BASE (Basically Available, Soft state, Eventual consistency).
Because RDBMS are ACID, they guarantee consistency immediately and at all times. This makes them inherently “slow” due to locking and their CPU affinity. “Slow” needs to be taken with a grain of salt, depending on what kind of environment your RDBMS runs on.
NoSQL stores have much lower latency. They do not place locks on their data and they distribute the processing of data across multiple nodes. This makes them not immediately consistent, but eventually consistent. You can use NoSQL stores in cases where eventual consistency is good enough, like in the case of immutable data.
You have different options on how to store your data, like key/value stores, document stores, graph stores, etc.
Architectures like CQRS, Complex Event Processing, etc. make full use of BigData and NoSQL concepts.

The presentations “A hint of NoSQL into my Java EE” by Guillaume Scheibel and “MongoDB for JPA Developers” by Justin Lee put these theories more into practice.
Using Hibernate OGM and Morphia respectively, they showed how you could migrate an existing application from using a relational database to using a NoSQL store.
With Hibernate OGM all you have to do is change the configuration slightly and switch the persistence provider. You also have to create a JBoss AS 7- and MongoDB-specific manifest file if you choose to use those platforms. Then you just redeploy and the application should work as before.
Morphia provides an API and annotations to perform a lightweight type-safe mapping of your Java objects to MongoDB. Here, more code changes are involved to allow the application to use a NoSQL store.

I found the “Data Access Patterns” and “NoSQL in JavaEE” presentations more informative than “MongoDB for JPA Developers”, because they explained more of the basic concepts of BigData and NoSQL, so developers without prior knowledge could already grasp the basics of the technology.
The Morphia demonstration went somewhat faster and deeper into the technology. It was also tuned specifically to Justin’s own project, of which I don’t know whether it will become widely adopted.

As a kind of side dish I went to the presentation “OpenShift Primer – get your Applications into the Cloud”, given by Eric D. Schabell.
As a JBoss technology evangelist he gave a broad overview of the possibilities of OpenShift, but in my opinion it was a bit too much of a marketing talk. He showed some applications that he had deployed in the cloud, but I had hoped to see more combinations in action. Not many technical details were presented.

Java EE

A lot of presentations at Devoxx this year were about where the Java language is heading and the changes that are happening in the Java ecosystem.

Arun Gupta & Antonio Goncalves

I attended the “Java EE 7: What’s New in the Java EE Platform” presentations by Java EE evangelists Arun Gupta and Antonio Goncalves during the University, and by David Delabassee during the Conference.
The Java EE 7 presentation of Arun and Antonio was quite theoretical, with only a few demos that went, in my opinion, too fast. But they were very enthusiastic speakers and gave a very good overview of the changes to the platform. Also, Arun’s demo code is publicly available on GitHub for you to play with, which is a good thing.
David’s presentation was much shorter of course, but he gave much clearer code examples. Especially the comparison between the JMS 1.1 API in Java EE 6 and the JMS 2.0 API in Java EE 7 was nice to see. I now also have a clearer view on what you can actually do with the WebSocket API.

Java EE 7 focuses on the Cloud, HTML5 and WebSockets as new technologies, support for JSON generation and parsing, simplified JMS and JAX-RS API’s, better integration of CDI, new specifications for Batch applications and Concurrency and some minor developer productivity changes in several other specifications.

I also went to some Tools in Action presentations about Java EE 7 and HTML5 development in the NetBeans and IntelliJ IDE’s. I had hoped to see some live Java EE coding, but in the case of the NetBeans presentation, it was just a pre-generated sample application that was shown. You can generate those yourself using the “New Project” wizard, so I think presenting that at Devoxx wasn’t that useful.
In the IntelliJ presentation I’ve seen a lot of neat code completion and a showcase of IntelliJ’s power. But I didn’t see that much Java EE 7-specific coding, which was what I actually came for.
Of course, you can’t show much during a presentation of just half an hour.

You can find my Java EE 7 blog post here.

To see “the other side”, I attended Sam Brannen’s presentation “Spring Framework 4.0 – The Next Generation”.
In my humble opinion, Spring doesn’t seem to add much anymore these days compared to current Java EE. Of course, Java EE picked up many of Spring’s past ideas and standardized them in its recent specifications.
Other than a way of having your application running in a managed environment without the use of a full-blown application server, nothing seems to be really different from Java EE anymore.
Some nice additions, like conditional creation and injection of beans, don’t exactly exist in the Java EE specification. Well, you can write producer methods in CDI, but that is more cumbersome than using Spring’s new @Conditional annotation. Other additions, like the possibility of having Groovy-based bean definitions, reach out to the new JVM Languages community but don’t seem that important to me.
Sadly, the presentation was purely theoretical. No demos or larger code examples were shown, which I found a pity.

Java SE

With regard to the Java Core language, Devoxx 2013 was all about Lambdas!

Lambdas are a very popular feature of many modern programming languages. They enable a more functional style of programming and support a “map-filter-reduce” pattern on collection data.
Before the addition of Lambdas to the Java language, similar features were already available, but they were implemented using anonymous inner classes. Those were hard to read and required a lot of boilerplate code.
Beyond readability, Lambdas were also added to the Java language because they make it easier to implement multi-threaded processing of collections, supported by the new Stream API.
The Stream API made changes to the existing Collections API necessary. To preserve backwards compatibility and not break any existing code that uses collections, Java interfaces were extended with the concept of default method implementations. This concept enables real multiple inheritance of behavior in Java, but specific rules are applied to avoid the “diamond of death” problem. Also, certain interfaces can be annotated as functional interfaces. Functional interfaces are interfaces with a single abstract method, and they are what Lambda expressions implement.
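A minimal sketch of these concepts, with names of my own choosing; note that List.sort is itself one of the new default methods:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Arun", "Antonio", "Venkat");

        // Pre-Java 8: an anonymous inner class implementing Comparator
        names.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8: Comparator is a functional interface (single abstract
        // method), so the same code becomes a Lambda expression
        names.sort((a, b) -> Integer.compare(a.length(), b.length()));

        // map-filter-reduce with the new Stream API
        int totalLength = names.stream()
                .filter(n -> n.startsWith("A"))
                .mapToInt(String::length)
                .sum();
        System.out.println(totalLength); // 11: "Arun" (4) + "Antonio" (7)
    }
}
```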

José Paumard gave a very good overview on Lambdas and their usage in his presentation “Autumn Collections : from iterable to lambdas, streams and collectors”.
I found his presentation very good and quite clear, although I was new to the subject. In the end I felt like I had a basic understanding of the concepts, but I still didn’t have a complete image of what I could do with Lambdas or what their power really was. I don’t have any knowledge of functional programming principles, so maybe that’s why I didn’t understand Lambdas completely.

Hoping that I would get a better understanding of Lambdas, I attended “Lambda: A Peek Under the Hood” by Brian Goetz. Boy, that was a mistake.
I respect Brian a lot, and he is a very good and enthusiastic speaker, but for me personally the talk went much too deep. Of course, that’s what you might expect from a presentation denoting “A Peek Under the Hood”.
You already had to have some knowledge about JVM bytecode and invokedynamic to be able to follow Brian’s reasoning. I didn’t get any information out of that presentation that would give me a better understanding of what Lambdas are all about, but honestly that’s not Brian’s fault.
One thing I remember well from his presentation: “The first idea that you have, is not always the best!”

Venkat Subramaniam

The presentation that was my favorite of Devoxx 2013 (except the JavaPosse LIVE –> Beer!!!) was “Java 8 Language Capabilities – What’s in it for you?” by Venkat Subramaniam. That guy can present!
Although it was a very fast-paced presentation, mostly due to the fast-paced fashion in which Venkat speaks, it was an eye opener for me. Venkat demonstrated the use of Lambdas from the ground up.
He started making a case concerning stock exchange data retrieved from a web service and implementing that the pre-JDK 8-way by using anonymous inner classes. During the examples, the performance of the application was shown. At first, processing all the data took several minutes.
He kept rewriting his example, continuously improving the code until he eventually arrived at an optimal implementation using Lambdas. He gave clear explanations of the different possibilities how you could tune the Lambda syntax and how you could optimize its usage.
In the end he changed the code to allow the data to be processed concurrently by multiple threads, only by changing one word in the code: stream() to parallelStream(). Performance went up from a few minutes to a few seconds!
Venkat has effectively demonstrated and proven the power of Lambdas.
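The one-word change can be sketched like this (the data and numbers are my own, not Venkat's stock exchange example):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelSketch {
    public static void main(String[] args) {
        List<Integer> prices = IntStream.rangeClosed(1, 100_000)
                .boxed().collect(Collectors.toList());

        // Sequential processing on a single thread
        long total = prices.stream()
                .filter(p -> p % 2 == 0)
                .mapToLong(p -> (long) p * p)
                .sum();

        // The same pipeline processed concurrently by multiple threads:
        // the only change is stream() -> parallelStream()
        long totalParallel = prices.parallelStream()
                .filter(p -> p % 2 == 0)
                .mapToLong(p -> (long) p * p)
                .sum();

        System.out.println(total == totalParallel); // true
    }
}
```

This works because the pipeline is free of shared mutable state, so the Stream API can safely split the work across a fork/join pool.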

Besides Lambdas, Java SE 8 also includes some other changes.
A new Date/Time API is built into the SDK, inspired by the popular Joda-Time API.
New methods, based on Commons Collections, have been added to the Java Collections API.
The JVM PermGen space is replaced by Metaspace, which stores class metadata in native memory.
Last but not least, a new JavaScript engine called Nashorn will be implemented and integrated into the SDK.
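The new Date/Time API gives you immutable, fluent date handling in the style of Joda-Time; a small sketch using the Devoxx 2013 dates as sample values:

```java
import java.time.LocalDate;
import java.time.Month;
import java.time.Period;

public class DateTimeSketch {
    public static void main(String[] args) {
        // LocalDate is immutable: plusDays returns a new instance
        LocalDate start = LocalDate.of(2013, Month.NOVEMBER, 11);
        LocalDate end = start.plusDays(4);

        // Period expresses a date-based amount of time
        Period length = Period.between(start, end);
        System.out.println(length.getDays()); // 4
    }
}
```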

Looking somewhat further into the future, I went to take a look at the presentation “The Modular Java Platform and Project Jigsaw” by Mark Reinhold. The goals of Project Jigsaw, which was postponed from Java SE 8 to Java SE 9, are modularization of the Java SDK to improve scalability, performance and security. Mark proposed a solution using *.jmod files that describe which modules of the SDK you include, effectively creating your own custom JRE. He gave an overview of which modules would likely exist and how they could be put together into a working application. This can be a solution if your space is constrained.
Mark gave a nice presentation, and the project can solve some issues still existing in the current SDK.

Web & HTML5

To become a better developer, more specifically a “Modern Java Web Developer”, I went to Matt Raible’s talk.
I had hoped to get code examples, demos and best practices, but I got none of that. For me the presentation only contained a lot of buzzwords, technologies and different frameworks, some of which I had never even heard of.
Again, that’s not Matt’s fault.

The thing I remember from Matt’s presentation is that you have to know ALL of it to be a Modern Java Web Developer: You have to be a polyglot programmer.
Things to keep in mind are that you have to use the JVM to its full potential: know how and when to use languages and frameworks like Groovy, Grails, Play, Scala, etc.
Get the most out of web technologies like HTML5, CSS3 and JavaScript. Several JavaScript frameworks and languages are available, like jQuery, AngularJS or CoffeeScript, and you can improve your web page design by using Bootstrap.
You also have to leverage the power of the most recent SDK’s: the new features of Java SE 7 and 8.
And Java EE is not the only thing that exists. Spring and its sub-projects also have a lot of potential, like Spring Data which can become the next Hibernate.
After that, more buzzwords and frameworks that were unknown to me appeared: Google Web Components, Polymer, Dart, Wro4j, … which was the point where he lost me.

JVM Languages

What struck me this year is that those new JVM Languages really get a lot of attention. Most of the time, the rooms where Groovy, Scala and the like were presented were packed.

In the past I did not really pay attention to those languages, nor did I investigate where they can be used or what their strengths are. So, after all the Java SE and Java EE presentations I had followed, Friday was time for something else.
I attended the “What Makes Groovy Groovy” presentation by Guillaume Laforge and “Coding in Style: How to wield Scala in the trenches” by Joshua Suereth.

Guillaume Laforge

I found the former presentation the best. Guillaume really showed me the basics of Groovy and compared it to the Java Core language. He started by explaining Groovy’s features and strengths and showed a Java code example that could also run as a Groovy class. Piece by piece he removed parts of the code, reducing the boilerplate until only the very least amount of code that Groovy needed to run remained. The result was astonishing: so little Groovy code was necessary, which made the result more readable and less tedious to write. While he was reducing the code, he showed specific features of Groovy like default constructors, properties that make getters and setters redundant, closures and more.
Groovy and Java can perfectly interact with each other, so a slow introduction of Groovy is possible. Groovy can best be introduced in a project for unit testing and scripting.

Joshua’s presentation wasn’t much to my liking, probably because I didn’t know anything about Scala yet and I expected an introductory talk like Guillaume’s Groovy presentation.
I persisted and tried to understand all the stuff that Joshua was explaining, but without any background in Scala and knowledge of the concepts or buzzwords that were thrown around, this presentation wasn’t my thing.

Wrapping up

As I already said in the beginning: I really enjoyed Devoxx again this year. It was very informative, a lot of fun and crowded as always.


I now have a clear view of the recent and upcoming changes in the Java SE and Java EE platform. I learned some new technologies and web frameworks that seem to be worth investigating. And it seems about time that I try out Groovy and/or Scala and become more of a polyglot programmer.

I bought the book “The Well-Grounded Java Developer: Vital techniques of Java 7 and polyglot programming” by Benjamin J Evans and Martijn Verburg, which I will try to read soon. This way I hope that I can already discover the basics of those things that are yet unknown to me.

I would like to kindly ask you to give me your feedback about this blog post, your experiences of Devoxx 2013 and your opinions or experiences of what someone has to do to become a Well-Grounded Java Developer.
I would also like to thank my employer for giving me the opportunity to attend this year’s Devoxx, to learn new things and to meet new people.

In the days or weeks to come, I’ll write some more blog posts about each track or technology that I discovered at Devoxx, and I’ll update this post to include links to those other posts.

Disclaimer: The statements and opinions described in this post are solely my own and do not necessarily represent the opinion of my employer, colleagues or of my clients.
With big thanks to BeJUG for the pictures: http://www.flickr.com/photos/bejug/sets/ 

Posted in Java | 3 Comments

I was wrong: Constructor vs. setter injection

Reading books or reference documentation is always good to get new ideas or to gain new insights.
While reading the Spring reference documentation, I realized I was wrong!

In one of my previous blog posts about Dependency Injection vs. Service Locator, specifically in the part “The final clash – Constructor vs. setter injection”, I said that I agreed with Martin Fowler.

Martin advocated the use of constructor injection as much as possible unless things are getting too complex. His advice is to use constructor injection to create valid objects at construction time. This advice originates from Kent Beck’s book Smalltalk Best Practice Patterns.
I myself have always used setter injection because that was the way I was taught to use Spring. But after reading Martin Fowler’s article, I agreed with having to use constructor injection more often.
In most circumstances, now I know I was wrong.

What are the problems with constructor injection?

No reconfiguration and re-injection

As the Spring reference documentation – Constructor-based or setter-based DI? states:

The Spring team generally advocates setter injection, because large numbers of constructor arguments can get unwieldy, especially when properties are optional. Setter methods also make objects of that class amenable to reconfiguration or re-injection later. Management through JMX MBeans is a compelling use case.

Some purists favor constructor-based injection. Supplying all object dependencies means that the object is always returned to client (calling) code in a totally initialized state. The disadvantage is that the object becomes less amenable to reconfiguration and re-injection.

Use the DI that makes the most sense for a particular class. Sometimes, when dealing with third-party classes to which you do not have the source, the choice is made for you. A legacy class may not expose any setter methods, and so constructor injection is the only available DI.

So indeed, when you use constructor injection and no setters exist, you cannot reconfigure the constructed bean by injecting new dependencies into it.
If you want to “reconfigure” such a bean, you’ll have to construct a new bean instance using the new dependencies and discard the old one.

Circular dependencies

Another problem occurs when you have circular dependencies.
Again, the Spring reference documentation – Circular dependencies states:

If you use predominantly constructor injection, it is possible to create an unresolvable circular dependency scenario.

For example: Class A requires an instance of class B through constructor injection, and class B requires an instance of class A through constructor injection. If you configure beans for classes A and B to be injected into each other, the Spring IoC container detects this circular reference at runtime, and throws a BeanCurrentlyInCreationException.

One possible solution is to edit the source code of some classes to be configured by setters rather than constructors. Alternatively, avoid constructor injection and use setter injection only. In other words, although it is not recommended, you can configure circular dependencies with setter injection.

Unlike the typical case (with no circular dependencies), a circular dependency between bean A and bean B forces one of the beans to be injected into the other prior to being fully initialized itself (a classic chicken/egg scenario).

While it’s not a recommended scenario, you could create a circular dependency using Spring. But not by using constructor-based injection. If you want to create a circular dependency, you’ll have to use setter-based injection.
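Stripped of Spring, the chicken-and-egg problem looks like this (class names are my own): with constructor injection neither object can be created first, while setter injection lets the container construct both beans and wire them afterwards.

```java
public class CircularSketch {

    static class A {
        private B b;
        public void setB(B b) { this.b = b; }
        public B getB() { return b; }
    }

    static class B {
        private A a;
        public void setA(A a) { this.a = a; }
        public A getA() { return a; }
    }

    public static void main(String[] args) {
        // Constructor injection would require an A to build a B and a B to
        // build an A — impossible to start. With setters, construction and
        // wiring are two separate phases, which is what Spring does here.
        A a = new A();
        B b = new B();
        a.setB(b);
        b.setA(a);

        System.out.println(a.getB().getA() == a); // true
    }
}
```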

I have an example of this, which can be downloaded through the following link: spring-setter-injection.zip
You’ll need Maven to build and run the example.

[UPDATE]
You’ll notice that the ConstructorBasedCircularDependencyTest does not fail.
Thanks to Mathew, I found a way of testing if the expected exception will occur. Instead of relying on the SpringJUnit4ClassRunner to create the ApplicationContext, I create the ApplicationContext myself inside the test method, and annotate the test method to expect the UnsatisfiedDependencyException, which wraps the BeanCurrentlyInCreationException.
[/UPDATE]

To end this post, I’ll quickly show how to use constructor-based injection and setter-based injection using Spring annotations.

Constructor-based injection:

@Component
public class A {
    private B b;

    @Autowired
    public A(B b) {
        this.b = b;
    }

    /**
     * @return the b
     */
    public B getB() {
        return b;
    }
}

Setter-based injection:

@Component
public class A {
    private B b;

    /**
     * @return the b
     */
    public B getB() {
        return b;
    }

    /**
     * @param b
     *            the b to set
     */
    @Autowired
    public void setB(B b) {
        this.b = b;
    }
}

So sometimes, when you get a new insight, you have to be able to acknowledge you were wrong about something.
I hope you all enjoyed reading this post. Feel free to post your comments below.

[UPDATE]
As I said before, a circular dependency is something that is to be avoided! But when you stumble upon one and you cannot refactor it out immediately, constructor injection is not going to work in that scenario. At least not with Spring.

But reading Petri’s post Why I Changed My Mind About Field Injection?, again gave me new insights. He does have a point and I think the same goes for setter injection.
When you have a messy constructor or a lot of setters, it means that something is wrong with your class design and your separation of concerns. Probably, you need to refactor some behavior out of the class into a separate one.
By the way, field injection can make unit testing harder. Using constructor or setter injection, you can define the dependencies from within your unit tests and pass them to your constructor or setter.

Use constructor injection for mandatory dependencies and setter injection for the optional ones, but make sure your constructor doesn’t get messy and you don’t end up with a whole bunch of setters. If that is the case, take a look at the separation of concerns.
[/UPDATE]

Posted in Java, Object Oriented Design, Spring | 15 Comments