Insecure Libraries

Sonatype and Aspect Security recently published a study titled “The Unfortunate Reality of Insecure Libraries” (registration required). The bottom line is that 80% of the code in today’s applications comes from libraries and frameworks and that the risk of vulnerabilities in these components is widely neglected. Sonatype and Aspect Security have analyzed the downloads from Maven Central and found that 26% of the downloaded libraries have known vulnerabilities.

Of course this is marketing material, but it nevertheless contains a lot of truth. Many organizations lack a process to ensure the libraries they are using in their applications are up to date. The larger an organization is, the higher the probability that it prefers not to update its dependencies for fear of breaking something. Never touch a running system – even if it is insecure.

You can argue that the metrics they use are inaccurate, as a vulnerability in a library that is used in an application does not imply that the application itself is vulnerable. However, if an application is not affected by a vulnerability in a dependent library, this is more often by coincidence than the result of analysis and an informed decision.

For applications that we are building for our customers we have a few rules in place that lower the risks involved:

  • We prefer proven frameworks and libraries with a good security track record
  • We check the general code quality of frameworks and libraries we use before we include them
  • Each iteration starts with updating the dependencies of our applications to their latest stable version
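
For Maven based projects the versions-maven-plugin is one way to support this last step; it reports dependencies for which newer versions are available:

mvn versions:display-dependency-updates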

While this works well as long as an application is being actively developed, it does not help during phases in which no active development takes place. It also doesn’t help with security issues that are discovered and need an immediate fix for the release currently deployed to production.

Therefore we offer support contracts for our applications that cover the latest production release in supported environments. To minimize cost we do not support older versions or milestone, beta and candidate releases.

For those versions we provide our customers with security fixes for vulnerabilities found in one of the supported products or the libraries used in one of these products. This of course includes monitoring the libraries and frameworks we use for reported vulnerabilities and security issues.

We also encourage our customers to plan for maintenance releases at least every six months to keep the dependencies up to date even if there are no new features to be included.

Vulnerability in ApacheDS 1.5

Apache Directory Server (ApacheDS) is an LDAP server implemented in Java from the Apache Software Foundation.

The server supports a number of password hash functions including MD5, SHA, SMD5 and SSHA, so that the clear text password used for authentication is not stored on the server and an attacker who gains access to the data cannot use it for authentication unless he breaks the hash.

Password checks are implemented in the class SimpleAuthenticator that includes the following code:

// Get the stored password, either from cache or from backend
byte[] storedPassword = principal.getUserPassword();

// Short circuit for PLAIN TEXT passwords : we compare the byte array directly
// Are the passwords equal ?
if ( Arrays.equals( credentials, storedPassword ) )
{
    if ( IS_DEBUG )
    {
        LOG.debug( "{} Authenticated", opContext.getDn() );
    }

    return principal;
}
The provided credentials are compared to the stored password, which can be either a plain password or the hash of a password. This causes ApacheDS to accept either the password or the corresponding hash for authentication. So authentication of a user with the password abc, which is stored as the salted SHA1 hash {SSHA}lIifvzM278asTV8NtjfO3EV3z4caaC5uJPouWw==, will succeed if either the original password or the hash is provided.

Both of the following calls will succeed:

ldapsearch -h localhost -p 10389 -D uid=admin,ou=system -x -w 'abc'
ldapsearch -h localhost -p 10389 -D uid=admin,ou=system -x \
  -w '{SSHA}lIifvzM278asTV8NtjfO3EV3z4caaC5uJPouWw=='

An attacker who gains access to the stored hash will thus be able to successfully authenticate as any user without having to know the password.
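
A correct implementation has to treat the stored value as scheme plus hash and hash the supplied credentials before comparing. The following is only a minimal sketch of such a check for the SSHA scheme (class and method names are made up, this is not the ApacheDS code):

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.Base64;

public class SshaPasswordCheck {

    // Stored format: "{SSHA}" + base64( sha1(password + salt) + salt )
    public static boolean verify(byte[] credentials, String storedValue) throws NoSuchAlgorithmException {
        if (!storedValue.startsWith("{SSHA}")) {
            return false; // this sketch only handles the salted SHA-1 scheme
        }
        byte[] decoded = Base64.getDecoder().decode(storedValue.substring("{SSHA}".length()));
        byte[] digest = Arrays.copyOfRange(decoded, 0, 20);            // a SHA-1 digest is 20 bytes
        byte[] salt = Arrays.copyOfRange(decoded, 20, decoded.length); // the remainder is the salt

        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(credentials);
        sha1.update(salt);
        return MessageDigest.isEqual(digest, sha1.digest());
    }
}

With a check along these lines only the correct clear text password matches; supplying the stored “{SSHA}…” value itself fails because it is hashed again and no longer matches the stored digest.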

It seems all versions of ApacheDS 1.5.x, including 1.5.7, are vulnerable. The new 2.0 branch does not seem to be vulnerable.

I’ve notified the Apache Security Team on 2012-03-12 and informed them on 2012-03-15 that I will publish this blog entry on 2012-03-19 after they remained silent for three days.

Emmanuel Lécharny finally replied that he does not consider 1.5.7 stable and that

People using the server *must* use 2.0.0-Mx versions, even if this version is not stabilized yet.

The reason they still link to the vulnerable 1.5.7 version in their “Latest Downloads” section without a word on the security issue is

Pure laziness… Sadly, we are knees deep into coding, and we have neglected the web site and the doco :/

Seems priorities are more on publishing good news.

Update 2012-03-27: Now more than two weeks after the notification they had plenty of time writing emails explaining why this isn’t a problem but apparently no time to remove the link to the vulnerable version from the Latest Downloads section.

ClassLoader Leaks by Oracle

I recently had trouble with a web application deployed on Tomcat that leaked its ClassLoader every time it was redeployed resulting in OutOfMemoryErrors after a few redeployments. This is quite nasty if you plan to do continuous deployment and don’t want to restart the servlet container with each deployment.

Recent versions of Tomcat include some code that makes you aware of problems when you undeploy the application:

SEVERE: The web application [] registered the JDBC driver [oracle.jdbc.OracleDriver] but failed to unregister it when the web application was stopped.
 To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
SEVERE: The web application [] appears to have started a thread named [Thread-14] but has failed to stop it. This is very likely to create a memory leak.
SEVERE: The web application [] appears to have started a thread named [Thread-15] but has failed to stop it. This is very likely to create a memory leak.
SEVERE: The web application [] appears to have started a thread named [Thread-16] but has failed to stop it. This is very likely to create a memory leak.
SEVERE: The web application [] appears to have started a thread named [Thread-17] but has failed to stop it. This is very likely to create a memory leak.
SEVERE: The web application [] appears to have started a thread named [Thread-18] but has failed to stop it. This is very likely to create a memory leak.

As you can see Tomcat managed to unregister the JDBC driver that the application had failed to unregister but could do nothing regarding the threads that had been started but not stopped.

I ran the application with YourKit attached to check that the WebappClassLoader had actually leaked and to see which threads were preventing it from being garbage collected. The “Paths from GC Roots” view in YourKit is well suited for this:

[Screenshot: ONS Leaking Threads]

As you can see there are four ONS threads that prevent the ClassLoader from being garbage collected: instances of oracle.ons.SenderThread and oracle.ons.ReceiverThread.

I wrote a small ServletContextListener that shuts down ONS to get rid of them. After that I noticed that Oracle had registered an OracleDiagnosabilityMBean that I had to unregister. Finally I made sure the JDBC drivers that the application had registered were properly unregistered from DriverManager.

With those changes in place the application undeployed well and was fully garbage collected.

Here is the code:

import oracle.ons.ONS;
import oracle.ons.SenderThread;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.util.ReflectionUtils;

import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import java.lang.management.ManagementFactory;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.List;

public class CleanUpListener implements ServletContextListener {
    private Logger logger = LoggerFactory.getLogger(getClass());

    public void contextInitialized(ServletContextEvent sce) {
        // do nothing
    }

    public void contextDestroyed(ServletContextEvent sce) {
        shutdownOns();
        deregisterOracleDiagnosabilityMBean();
        deregisterJdbcDrivers();
    }

    // Shuts down ONS and stops its sender threads that keep a reference to the WebappClassLoader.
    private void shutdownOns() {
"Shutting down ONS");
        final Method getRunningONS = ReflectionUtils.findMethod(ONS.class, "getRunningONS");
        final Method shutdown = ReflectionUtils.findMethod(ONS.class, "shutdown");
        final ONS ons = (ONS) ReflectionUtils.invokeMethod(getRunningONS, null);
        if (ons == null) {
            return;
        }
        ReflectionUtils.invokeMethod(shutdown, ons);

        // shutdown() alone does not stop the sender threads, so stop them explicitly
        final Field senders = ReflectionUtils.findField(ONS.class, "senders");
        ReflectionUtils.makeAccessible(senders);
        final List<SenderThread> senderThreads = (List<SenderThread>) ReflectionUtils.getField(senders, ons);
        if (senderThreads == null) {
            return;
        }
        final Method stopThread = ReflectionUtils.findMethod(SenderThread.class, "stopThread");
        for (SenderThread senderThread : senderThreads) {
            ReflectionUtils.invokeMethod(stopThread, senderThread);
        }
    }

    // Deregisters the JDBC drivers this web application registered with DriverManager.
    private void deregisterJdbcDrivers() {
"Deregistering JDBC Drivers");
        final Enumeration<Driver> driverEnumeration = DriverManager.getDrivers();
        final List<Driver> drivers = new ArrayList<Driver>();
        while (driverEnumeration.hasMoreElements()) {
            drivers.add(driverEnumeration.nextElement());
        }

        for (Driver driver : drivers) {
            if (driver.getClass().getClassLoader() != getClass().getClassLoader()) {
                logger.debug("Not deregistering {} as it does not originate from this webapp", driver.getClass().getName());
                continue;
            }
            try {
                DriverManager.deregisterDriver(driver);
                logger.debug("Deregistered JDBC driver '{}'", driver.getClass().getName());
                if ("oracle.jdbc.OracleDriver".equals(driver.getClass().getName())) {
          "Deregistered Oracle JDBC driver");
                }
            } catch (Throwable e) {
                logger.error("Deregistration error", e);
            }
        }
    }

    // Unregisters the OracleDiagnosabilityMBean that the Oracle driver registered for this ClassLoader.
    private void deregisterOracleDiagnosabilityMBean() {
        final ClassLoader cl = Thread.currentThread().getContextClassLoader();
        try {
            final MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            final Hashtable<String, String> keys = new Hashtable<String, String>();
            keys.put("type", "diagnosability");
            keys.put("name", cl.getClass().getName() + "@" + Integer.toHexString(cl.hashCode()).toLowerCase());
            mbs.unregisterMBean(new ObjectName("", keys));
  "Deregistered OracleDiagnosabilityMBean");
        } catch ( e) {
            logger.debug("Oracle OracleDiagnosabilityMBean not found", e);
        } catch (Throwable e) {
            logger.error("Oracle JMX unregistration error", e);
        }
    }
}
Book Review: Modular Java

Modular Java by Craig Walls is a book on building modular Java applications on OSGi platforms.

Published in the Pragmatic Bookshelf series it lives up to the standards of that great series by presenting content that matters in a format that makes you want to try it out immediately.

OSGi’s value proposition is to keep complexity in software products manageable. It keeps modules isolated from each other and encourages loose coupling through publishing and consuming services. You can think of OSGi as an incarnation of service oriented architecture (SOA) within the Java Virtual Machine. OSGi has its roots in the embedded systems world and became popular for desktop applications when Eclipse adopted it as its core infrastructure. Having made its way from embedded systems to the desktop, OSGi is now coming to the server side.

Craig’s book introduces the basics of OSGi, shows how it isolates modules by its unique approach to classpath handling and gives you an overview of the concept of OSGi services. One easy to follow example application is used consistently throughout the book to show the various aspects.
Most of the time the book uses Equinox as its runtime but Felix and Knopflerfish are also mentioned. For building bundles Modular Java makes use of Pax, especially Pax Construct. It does not cover Bundlor, Bnd or Tycho in depth. The Maven based Tycho stack in particular sounds really promising, so it’s unfortunate that it isn’t covered. I guess, however, that this is simply a consequence of the current speed of development in the OSGi tooling space.

Spring Dynamic Modules is an attempt to bring the principles of the Spring Framework to OSGi. In the spirit of the Spring Framework, Dynamic Modules (DM) builds on a proven solution (OSGi in this case) and makes it easier to use. It eliminates a lot of the boilerplate code that is normally required to handle OSGi services, which may appear and disappear at any time. Spring-DM also provides integration with Spring Application Contexts and has support for web applications through an extender. Spring-DM is covered really well by the book. An appendix describes the new OSGi Blueprint Services, which are an attempt to standardize the ideas of Spring-DM. Spring’s new dm Server is not covered.

The book focuses on the core concepts and shows the benefits of using OSGi for application development. The target audience are experienced Java developers. It is very well written, easy and fun to read and serves as a great introduction. I recommend the book to Java developers who consider making use of OSGi in future projects.

Code Style: Final Arguments

Java allows you to make arguments final by declaring them as final in the argument list of the method declaration:

public void doSomething(final String foo)

This means that inside the method you cannot change what the argument reference points to, i.e. it prevents you from doing things like this:

  foo = "bar";

It is bad style to change the reference an argument points to. You should treat all arguments as if they were marked final. It would have even been a good idea to make this a language feature and have Java treat all arguments as final by default.

However, does this justify declaring all arguments as final? Some people suggest it, though I haven’t seen this in the wild very often.

The pros:

  • enforces not changing the reference an argument points to

The cons:

  • makes method signatures longer and harder to read
  • takes longer to write, being lazy is a virtue

There is one thing to note: Changing the reference the argument points to does not actually change the value of the caller’s variable passed to the method:

public void testDoSomething() {
    String foo = "foo";
    doSomething(foo);
    System.out.println(foo); // still prints "foo"
}

public void doSomething(String foo) {
    foo = "bar";
}

However if you pass an object that is not immutable and you change its state inside the called method, you actually do change the caller’s data. This is nothing a final modifier can prevent, though it may be a source of trouble if not properly stated in the contract of the method.
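
The difference is easy to see in a made-up fragment (assuming java.util.List is imported): the final modifier only forbids reassigning the parameter; it does not prevent mutating the object the parameter refers to.

public void addDefaultTag(final List<String> tags) {
    // tags = new ArrayList<String>(); // would not compile: tags is final
    tags.add("default");              // compiles fine and modifies the caller's list
}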

So to sum it up: You should not change the reference an argument points to as it causes confusion. You can prevent this by adding the final modifier. Doing so, however, clutters your code and thus shouldn’t be done (except when the argument needs to be used in an inner class). Pay attention not to change the state of an object passed to a method if that’s not part of the method’s contract.

The same applies to some extent to final local variables. Use them where they make your code easier to understand but not everywhere you could. If you are a fan of final have a look at Scala’s val keyword.

Nexus vs. Artifactory

Until now we didn’t use a repository manager for Maven. Our repos were a plain directory structure on the file system served by Apache. Uploading was done using Apache’s WebDAV capabilities with a simple authentication against our LDAP directory:

<Location /maven>
  Options Indexes

  DAV On
  AuthType Basic 
  AuthName "reucon Maven Repositories"
  AuthBasicProvider ldap
  AuthLDAPURL ldap://
  AuthLDAPBindDN uid=httpd,ou=techusers,o=myorg
  AuthLDAPBindPassword secret
  AuthzLDAPAuthoritative off

  Require valid-user
  FileETag None
</Location>

We are maintaining four repositories: one for our public Open Source artifacts and one for proprietary internal artifacts along with corresponding snapshot repositories. Access to the internal repository was limited based on an IP address range.

There are multiple reasons for us to use a repository manager:

  • Unified access to repositories

    In the old days all you needed was the central repo (formerly known as ibiblio). Times have changed and now many of our projects require artifacts from a variety of different repositories. It seems many organisations prefer having their own repos instead of publishing to central. This includes SpringSource, JBoss, Codehaus and several snapshot repos like the one for Apache. Maintaining a list of these repos in each developer’s settings.xml is a pain and including them in the poms makes things even worse in the long run.
  • Finer grained access control
    On the one hand we need access to our internal repo from outside of our internal network, so IP based access control no longer works well. On the other hand not all developers should be allowed to publish releases. Some kind of role based access control was needed.
  • Automated creation of the Nexus index
    A Nexus index is basically a zip file containing a Lucene index of the artifacts in a repository. Most Maven IDE plugins now support searching for artifacts when adding dependencies to a project. To make this work the IDE must be able to download an up to date index of the repositories.
  • Web based artifact search
    You may know a web site to search for artifacts in the central repo. I’ve often used it to find the correct version and maven coordinates for a dependency. A similar solution for our internal repositories would be nice.

There are a lot more reasons to use a repository manager, like faster builds through caching of artifacts or black- and whitelisting of artifacts based on corporate standards, but those listed above were the key factors for us.

There are three products that can be used: Apache Archiva, Sonatype Nexus and JFrog’s Artifactory. There is a feature matrix that compares them.

I dropped Archiva because its development is rather slow and it is missing some important features like grouped repositories. So Nexus and Artifactory remained. I came across two blog postings from January: Sonatype’s comparison and JFrog’s response. Combined they provide a lot of insight. Here is my own comparison:


Authentication and Authorization

Good LDAP integration is a must have. Artifactory supports this out of the box; Nexus does not include LDAP support in its Open Source edition, it is a Nexus Pro feature. I do understand that Sonatype is trying to sell its Pro version but LDAP support is really a basic and vital feature. Fortunately it is not too difficult to implement a custom authenticator for Nexus and in fact there is already a project at Google Code called nexus-ldap that adds free LDAP support to Nexus.

Both Artifactory and Nexus support fine grained role based authorization. One problem I faced with Artifactory was that requiring authentication seems to be a global setting, so you either require authentication for all repositories or for none. This is less flexible than Nexus, which allows us to make our Open Source repository available without authentication and to require authentication only for deployment and for our internal repositories.
Artifactory has a nice additional feature that eliminates the need to store the user’s password in Maven’s settings.xml in clear text by encrypting the password with a user specific key stored in Artifactory. This is an interesting approach and I would like to see this concept being used more widely (e.g. for Subversion).

Storage

How the repository manager stores artifacts and meta data is the biggest difference between Artifactory and Nexus. Artifactory uses a Java Content Repository (JCR) that can optionally be hosted in a MySQL database.
Nexus stores artifacts and meta data in the file system. It uses the Maven layout, so it is easy to access the repositories managed by Nexus externally. This comes in handy not only for migration but also when synchronizing to central through rsync. Though Artifactory offers an export feature, having my repository data available directly on the file system makes me feel better.

Searching and Indexing

Both Nexus and Artifactory publish indexes (based on Lucene) and provide a web interface for searching artifacts stored in the repository. Nexus takes this one step further and also allows searching for artifacts in proxied repositories that are not yet stored in the local repository. This is really handy and eliminates the need to use external sites to search for artifacts and their current versions.

Conclusion

Before I found nexus-ldap I was about to choose Artifactory over Nexus. Now I prefer Nexus for its file system based storage and its better searching.


Maven Release with Subversion 1.5 and 1.6

There is a problem with the maven-release-plugin when used with recent versions of Subversion. It started with version 1.5.1 of Subversion and made the release:prepare command fail because Maven was no longer able to tag the release.

You may have encountered the following error with release:prepare:

svn: File '...' already exists

One reason for this can be SCM-406.

For some time I’ve worked around this issue by doing my releases on a machine with an older version of Subversion.

A better solution is to use the latest version of the maven-release-plugin (2.0-beta-9 at the moment) and set the remoteTagging property to true:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-release-plugin</artifactId>
  <version>2.0-beta-9</version>
  <configuration>
    <remoteTagging>true</remoteTagging>
    <preparationGoals>clean install</preparationGoals>
  </configuration>
</plugin>

Keep in mind that you should always specify the exact version of the plugins you are using. This not only makes sure you get what you need, it also ensures that the build is reproducible in the future and works consistently across different machines.

JBoss Admin Console

JBoss AS 5.1.0-CR1 is out and finally includes a brand new admin console.

After installing JBoss AS 5.1.0 you can access it at http://localhost:8080/admin-console. Login with user “admin” and password “admin”.

The project behind the admin console is Embedded Jopr, a web-based application for managing and monitoring an instance of JBoss AS. It is the little brother of Jopr, a full-fledged systems management tool that helps manage and monitor multiple instances of JBoss AS, Apache Webserver, Tomcat and more. The advantage of Embedded Jopr is that it is available out of the box and does not need any external resources like a database or separate agents. I am sure it makes operating JBoss AS a lot easier and more fun.

Give it a try and download the latest version.

IntelliJ IDEA 8.0

JetBrains has released IDEA 8.0. It adds support for a few new frameworks like JBoss Seam, Struts 2, GWT 1.5 and RESTful web services, and updates the Spring support, covering Spring 2.5, Spring Web Flow, Spring MVC and Spring Dynamic Modules. Template languages like FreeMarker and Velocity are now supported as well, and support for XPath and XSLT has been improved.
In version 7.0 JetBrains introduced support for Maven, which has been further enhanced in 8.0:

  • Creating new projects from Maven archetypes.
  • Resource filtering with built-in Make.
  • Manually added libraries and modules dependencies support.
  • Completion of artifacts’ groupId, artifactId, version, exclusions, based on downloadable repository indices.
  • Code completion for plugin configuration.
  • Parent and dependencies generation in pom files with Alt+Insert.
  • Add Maven Dependency Quick Fix for unresolved classes in java code.
  • Support for Web Overlays.

This makes IDEA 8 the best choice for the development of mavenized projects.

Additionally the built-in Subversion connector has been updated for Subversion 1.5 and has support for merge-tracking.


Spring’s New Maintenance Policy

SpringSource, the company behind the popular Spring Framework, has announced a new maintenance policy: SpringSource Enterprise Maintenance Policy, effective September 2008.
After a lot of discussion in the community they have now added a Frequently Asked Questions document.

Spring Framework was originally created to overcome the limitations of Enterprise Java Beans (version 1 and 2) and make it easier to build J2EE applications. It has introduced dependency injection to a broad audience and changed the way many enterprise applications are built today. For many years it has been a vendor independent Open Source project available under the Apache Software License. Some time ago the creators of Spring Framework started their own company, received venture capital and things started to change. They’ve added new products like a new application server, bought Covalent and are looking for ways to make money.

In contrast to their new proprietary products, which require a commercial license, there has not been a real opportunity to make money from the Spring Framework itself. Community support was fine, regular maintenance updates fixed the issues that were discovered and there was no need for commercial support. The recent announcement of their new maintenance policy seems to be their answer to that: they try to create a need for their support offerings. So the new policy basically states:

  • Free maintenance updates will only be available for three months after a major release
  • Maintenance releases will be available to paying customers under a commercial license for three years after a major release
  • Bug fixes will be committed to a maintenance branch but minor releases will not be tagged after the three month period, so the community will not know which versions are stable

Though the major releases will remain Open Source, the more mature minor releases that follow three months later will not. Spring Framework 2.0 was released in October 2006 and Spring Framework 2.5 in November 2007, which means that the community will be without minor releases for more than 9 months if the frequency of their releases remains similar.

Sure, you can always build from the sources but this looks like a bad idea given that SpringSource refuses to tag consistent and tested versions.

I can understand the desire to make money from the Spring Framework but I am not sure this approach will be successful. For me the products of SpringSource have lost their strong advantage of being vendor independent and fully Open Source. Upcoming projects will have to consider this fact and investigate alternatives.

Update 2008-10-08

SpringSource has listened to the community and updated its maintenance policy: A Question of Balance: Tuning the Maintenance Policy. They’ve dropped the 3 month window and will provide community releases from trunk for each version of Spring while it remains the current trunk or until the next version is stable.