Thursday, August 14, 2014

Using Google Guice in a NetBeans RCP application

We have been using Tapestry IoC in a NetBeans RCP application for a while. We recently added more NBMs that use Tapestry, either for IoC or as a web framework. We then ran into some dependency issues and decided to take the opportunity to replace Tapestry as the IoC container with the JSR-330-compliant Google Guice.

The way the application is configured is pretty similar in Tapestry and Guice, so rewriting the module didn't take long. In addition, we annotated the necessary classes with @Inject, some of them residing in other NBMs.
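
As an illustration only (the service, panel, and module names below are invented, not the actual classes from our application), a Guice module with JSR-330-style constructor injection looks roughly like this:

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import javax.inject.Inject;

// Hypothetical service interface and implementation.
interface GreetingService {
    String greet(String name);
}

class SimpleGreetingService implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// A class from another NBM, receiving its dependency through constructor injection.
class GreetingPanel {
    private final GreetingService service;

    @Inject
    GreetingPanel(GreetingService service) {
        this.service = service;
    }
}

// The Guice module that takes over the role of the Tapestry IoC module.
class ApplicationModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(GreetingService.class).to(SimpleGreetingService.class);
    }
}

class Bootstrap {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new ApplicationModule());
        // Guice resolves the @Inject-annotated constructor and supplies the service.
        GreetingPanel panel = injector.getInstance(GreetingPanel.class);
    }
}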

When launching the application we got six errors with the message:

Could not find a suitable constructor in com.example.ClassName. Classes must have either one (and only one) constructor annotated with @Inject or a zero-argument constructor that is not private
Googling gave the impression that not many people have combined NetBeans RCP and Guice; at least, I wasn't able to find a solution.

Then I noticed that all the errors were related to classes in NBMs other than the one we were setting up IoC in. NetBeans uses a separate class loader for each NBM, so the @Inject annotation in the other NBMs was loaded by a different class loader than the one Guice itself used. That's why Guice couldn't recognise that the constructors were correctly annotated.
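
The effect is easy to reproduce outside NetBeans. The snippet below only illustrates the class loader problem; it is not code from the application, and the jar path is a placeholder:

import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder path to the javax.inject jar; adjust to your environment.
        URL jar = new URL("file:/path/to/javax.inject-1.jar");

        // Two isolated class loaders, comparable to two NBMs that each bundle the jar.
        // A null parent keeps them from delegating to a shared application class loader.
        ClassLoader first = new URLClassLoader(new URL[]{jar}, null);
        ClassLoader second = new URLClassLoader(new URL[]{jar}, null);

        Class<?> injectFromFirst = first.loadClass("javax.inject.Inject");
        Class<?> injectFromSecond = second.loadClass("javax.inject.Inject");

        // Same fully qualified name, but two distinct Class objects, so an
        // annotation lookup done against one of them misses the other.
        System.out.println(injectFromFirst == injectFromSecond); // prints false
    }
}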

The solution for us was to create a Library Wrapper Module for the jar containing the @Inject annotation, namely javax.inject.

The pom file we used for creating the wrapper, minus release- and deploy-related stuff, looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>javax.inject</artifactId>
    <version>1</version>
    <packaging>nbm</packaging>

    <name>Wrapper for javax.inject</name>

    <dependencies>
        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>nbm-maven-plugin</artifactId>
                <version>3.13</version>
                <extensions>true</extensions>
                <configuration>
                    <publicPackages>
                        <publicPackage>javax.inject.**</publicPackage>
                    </publicPackages>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.5</version>
                <configuration>
                    <useDefaultManifestFile>true</useDefaultManifestFile>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>


Saturday, May 14, 2011

Analyzing Java memory usage

Even when using Java, one might encounter memory issues. In those cases, it is good to know that there are tools to help you out, some of which are bundled with Sun's JDK distribution.

Creating a Java heap dump
The first thing to do is to obtain a heap dump, in which Java writes the current content of its heap to a file. There are several ways to achieve this: explicitly, by issuing commands from a command line, programmatically via JMX, or by instructing the JVM to create a heap dump when an OutOfMemoryError occurs.

The simplest way to get a heap dump from a running system is to issue the jmap command from a command line. For instance,

$ jmap -dump:live,format=b,file=heap.bin 6114

will dump the live objects in the heap of the Java process with process ID 6114 to a file called heap.bin. The process ID can be found with jps or, on a Unix system, ps.
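
The JMX route mentioned above goes through the HotSpot-specific HotSpotDiagnosticMXBean. A minimal sketch, with an arbitrary output path, could look like this:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Look up the HotSpot diagnostic MBean on the platform MBean server.
        HotSpotDiagnosticMXBean diagnostics = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // The second argument means "live objects only", like jmap's live option.
        diagnostics.dumpHeap("/tmp/heap.bin", true);
    }
}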

But what if one wants a dump of the heap as it is when an OutOfMemoryError occurs? One of the arguments the JVM accepts is: -XX:+HeapDumpOnOutOfMemoryError. As the name of the argument suggests, this tells the JVM to create a dump of the heap when an OutOfMemoryError occurs. If you also supply the argument -XX:HeapDumpPath, you can instruct the JVM on where to put the generated dump file (by default it is created in the working directory of the VM). For instance:

$ java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heap.bin ...
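
To verify that the flags behave as expected, any small throwaway program that exhausts the heap will do, for instance:

import java.util.ArrayList;
import java.util.List;

public class OomTest {
    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<byte[]>();
        while (true) {
            // Keep allocating 1 MB blocks until the heap is exhausted and the
            // JVM writes the dump to the configured path.
            hog.add(new byte[1024 * 1024]);
        }
    }
}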
 
Analyzing the heap dump
The heap dump file obtained by the above is a binary file not suited for manual inspection. However, the JDK distribution also contains jhat (Java Heap Analysis Tool), a tool for analyzing a binary heap dump file. It parses the heap file and launches a web server, allowing you to navigate the information in the heap in a web browser. The command

$ jhat heap.bin

will parse the heap in the file heap.bin. Launching a web browser and entering localhost:7000 in the address bar presents a listing of all loaded classes. Clicking on one of the class names opens a new screen with more details on the selected class.

It is worth noticing that at the very bottom of the first screen (All classes) there are a number of useful links, for instance "Heap Histogram", which shows the number of instances and the total size for each class. Another link takes you to a page where you can execute Object Query Language (OQL) queries. One example of an OQL query is:
select s from java.lang.String s where s.count >= 100
which will show you all Strings of length 100 or more (count is String's internal length field). See the "OQL Help" page for more on OQL.

Another way of inspecting the obtained heap dump is with the NetBeans IDE. Simply download the smallest edition of it, launch it, and import your heap dump with "Load Heap Dump..." under "Profile" on the menu bar. This gives you the same information as jhat, but in a more user-friendly way.

Wednesday, March 16, 2011

Grails - Excluding plugin dependencies from war

I started using Maven to build my Grails project and upgraded Grails to version 1.3.7. Running mvn grails:run-app with Tomcat worked fine, but when I tried to generate a war file with mvn package and deploy it to Jetty, it failed:

Exception in thread "main" java.lang.IllegalAccessError: tried to access field
org.slf4j.impl.StaticLoggerBinder.SINGLETON from class org.slf4j.LoggerFactory
at org.slf4j.LoggerFactory.<clinit>(LoggerFactory.java:60)

Googling led me to the SLF4J FAQ (http://www.slf4j.org/faq.html), which says that this is caused by slf4j-api <= 1.5.5 being incompatible with an slf4j binding > 1.5.5. My application declares slf4j-api 1.5.8 and slf4j-log4j12 1.5.8 as dependencies. Running mvn dependency:tree didn't reveal any slf4j-api <= 1.5.5, but the application's war file did indeed contain slf4j-api-1.5.2.jar in WEB-INF/lib. Where did it come from?

Running mvn package with the Ivy resolver's log level set to "info" showed that it was pulled in by the Hibernate plugin, through its dependencies hibernate-core and hibernate-commons-annotations. Problem solved, I thought: Grails has support for overriding which of a plugin's dependencies to exclude. The Grails Reference Documentation says:

If a plugin is using a JAR which conflicts with another plugin, or an application dependency then you can override how a plugin resolves its dependencies inside an application using exclusions. For example:

plugins {
   runtime( "org.grails.plugins:hibernate:1.3.0" ) {
     excludes "javassist"
   }
}

This means that all I have to do is include the following in my BuildConfig.groovy:

runtime("org.grails.plugins:hibernate:1.3.7") {
  excludes "slf4j-api"
}

No luck. slf4j-api-1.5.2.jar was still there. Googling suggested that there is a bug in Grails that prevents it from excluding transitive dependencies of plugins (GRAILS-6910). All right, let me exclude hibernate-core and hibernate-commons-annotations, then, and include them as dependencies with slf4j-api excluded, i.e.:

dependencies {
  compile("org.hibernate:hibernate-commons-annotations:3.1.0.GA") {
    excludes "slf4j-api"
  }
  compile("org.hibernate:hibernate-core:3.1.0.GA") {
    excludes "slf4j-api"
  }
}

plugins {
  runtime("org.grails.plugins:hibernate:1.3.7") {
    excludes "hibernate-core", "hibernate-commons-annotations"
  }
}

Still no luck.

Finally, Google led me to this page: Excluding files from a WAR with Grails – the right way. There, Marc Palmer suggested adding the following to Config.groovy to exclude a specific jar file from the war file:



grails.war.resources = { stagingDir ->
  delete(file:"${stagingDir}/WEB-INF/lib/slf4j-api-1.5.2.jar")
}

Ran mvn package while I held my breath. Once finished, I checked the generated war file, and, to my disappointment, the slf4j-api-1.5.2.jar was still there.

However, reading through the post's comments, a commenter suggested that the closure should go in BuildConfig.groovy rather than Config.groovy.


Tried that. And finally, no slf4j-api-1.5.2.jar in the generated war file, only my own slf4j-api-1.5.8.jar.

Deployed the war to Jetty, and it worked like a charm.

It seems to me that Grails' dependency manipulation mechanisms are rather buggy at the moment, but I find this an acceptable workaround.

Sunday, November 28, 2010

Decreasing page load time in a Java web application

This post mentions some of the things you can do to decrease page load time in a Java-based web application, based on Yahoo!'s list of best practices for speeding up your web site (http://developer.yahoo.com/performance/rules.html).

Minimize HTTP Requests
When it comes to external stylesheets and JavaScript files, Web Resource Optimizer for Java (wro4j) can be used to reduce the number of requests required to fetch all content. During development, I find it more maintainable to have stylesheets and JavaScript split into several files. wro4j merges these files at runtime, so you can keep them split into several files but serve them with only one request. How to set up and use wro4j is explained well at http://code.google.com/p/wro4j/wiki/GettingStarted.

To reduce the number of requests for images, CSS sprites can be used. Use, for instance, SpriteMe (http://spriteme.org) or CSS Sprite Generator (http://spritegen.website-performance.org) to create the sprite. I use CSS Sprite Generator to create a starting point, and then modify and maintain the sprite by hand (GIMP). Pay attention to the placement of the images within the sprite, so that adjacent images don't show up in the wrong places on your page.

Add an Expires or a Cache-Control Header
By default, wro4j sets the Expires header ten years into the future for content served through the wro4j filter. This tells the client browser that the content won't change for ten years, so it shouldn't bother asking for it again before then. Be aware that the browser won't even ask whether the content has changed, so unless the content is removed from the browser's cache, it will use its cached version for ten years. This means that if you modify one of the files served by wro4j but don't change the reference to the group containing the file, clients won't get the updated file for ten years.

Since wro4j only handles CSS and JavaScript files, you have to find another way of setting the Expires header on other content, such as images. This can be accomplished by implementing a filter that sets the header on specific content. See http://juliusdev.blogspot.com/2008/06/tomcat-add-expires-header.html for an example.
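
As a sketch of such a filter (the class name, the one-year lifetime, and the assumption that it is mapped to image URLs in web.xml are my own choices, not taken from the linked example):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter that adds far-future caching headers to whatever URL
// patterns it is mapped to in web.xml, e.g. *.png and *.jpg.
public class ExpiresHeaderFilter implements Filter {

    private static final long ONE_YEAR_IN_MILLIS = 365L * 24 * 60 * 60 * 1000;

    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse httpResponse = (HttpServletResponse) response;
        // Expires: an absolute date one year from now.
        httpResponse.setDateHeader("Expires", System.currentTimeMillis() + ONE_YEAR_IN_MILLIS);
        // Cache-Control: the same lifetime expressed in seconds.
        httpResponse.setHeader("Cache-Control", "max-age=" + (ONE_YEAR_IN_MILLIS / 1000));
        chain.doFilter(request, response);
    }

    public void destroy() {
    }
}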

Gzip Components
Gzip is a compression method that generally reduces the size of your files by about 70%. Once again, wro4j does the job for you: by setting the filter's gzipResources init parameter to true, wro4j will gzip your CSS and JavaScript resources. Images shouldn't be gzipped, because they are already compressed.

Minify JavaScript and CSS
At the risk of repeating myself, wro4j does this for you too: it minifies both your JavaScript and CSS resources by default. One useful thing to note is that if you have set the filter's configuration init parameter to DEVELOPMENT, you can add the query parameter minimize=false to the URL of your CSS or JavaScript groups to disable minification. I, at least, find it difficult to track down errors in minified content.