
100% code coverage, Hibernate Validator and Design by Contract

Code coverage in unit testing was one of the first things I learned in my software engineering career. The company I worked at taught me that you should have 100% coverage as a goal, but that achieving it does not mean there are no bugs in the system. That company's big selling point was that it delivered very reliable billing software to telecoms operators. We used to invest as much time writing unit tests as we did writing the code. If you included unit tests, integration tests, system tests and acceptance tests, more time was spent testing than designing and implementing the code which ran in production. It was impressive, and I have never worked with or for a company with that kind of model since, although I am sure there are many companies which operate like that.

The other day I was thinking back to those days and reading up about unit testing to refresh myself in preparation for a unit testing course I'm attending soon (don't want the instructor to know more than me!), and I started to wonder about what kind of code could be fully covered during unit testing but still contain a bug. While I learned that 100% coverage should be the goal, in over 10 years I have never worked on a project which achieved it. So I have never been surprised to find a bug in code which I thought was "fully" tested.

It's fun to write wacky code - you don't normally try to write bugs :-) I started with the specification:

    /**
     * @return String "a" or "b" depending upon the values
     * passed to the init method. If variable "a" is true,
     * then string "a" is returned. If variable "b" is true,
     * then "b" is returned. If neither is true, then "" is
     * returned. Variable "b" is more important than "a", so if
     * both are true then return "b".
     */

Now, the tricky part was to write code which contained a bug yet was easy to cover with incomplete tests. I came up with the following. The specification above is for the "getResult()" method.

public class SomeClass {

    private boolean a;
    private boolean b;

    public SomeClass init(boolean a, boolean b) {
        this.a = a;
        this.b = b;
        return this;
    }

    public String getResult() {
        String s = "";
        if(a) {
            s = "a";
        }
        if(b){
            s += "b"; // <-- oops!
        }
        return s;
    }
}

The "bug" is in the method "getResult" and the problem is that instead of just an assignment, the "plus equals" operator has been used. An "else" would probably make the code a little safer too. But I think code like this is quite typical of the buggy code that finds its way into productive systems. Even the best programmers have lapses of concentration where they write typos like this (especially in open plan offices!).

The unit test written by a programmer trying to achieve 100% coverage would look something like this:

    @Test
    public void test() {
        assertEquals("a", new SomeClass().init(true, false).getResult());
        assertEquals("b", new SomeClass().init(false, true).getResult());
    }

Using Emma for Eclipse I measured complete coverage. But wait! There is still a bug. The code does not do what it says in the Javadoc spec. If I initialise the object with "true, true", then the result will be "ab", because of the "plus equals" operator. So even though I have 100% coverage, I still have a bug.
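A third assertion, covering the combination which the Javadoc calls out, would have exposed the bug - this test fails against the code above:

    @Test
    public void testBothTrue() {
        // the spec says "b" wins when both flags are true
        assertEquals("b", new SomeClass().init(true, true).getResult()); // actual: "ab"
    }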

I asked myself what that means to me as a programmer. What do I need to look out for when writing tests? Imagine the code above was tucked away among hundreds of other lines of code; the chances of seeing it would be really quite small. The test wouldn't be just two lines long and the problem wouldn't be jumping out of the page.

One way to look at the problem is to say that there is a bug because the code isn't fulfilling its contract. OK, I use "contract" in the loose sense of the word, but the Javadoc is basically a contract. It tells anyone calling that method what to expect, yet the code is not doing what the user expects.

So perhaps one solution is to not only entirely exercise the code, but entirely exercise the contract? Is there a way to translate the Javadoc contract into something more concrete which the testing tools can help me check? Yes, namely by using some kind of Design (or Program) by Contract (DBC or PBC) framework. JSR-303 isn't strictly DBC, but close. It lets you use annotations to state your expectations about parameters passed to methods, as well as your expectations about the result being returned. You can create your own complex constraints quite easily. I added the following annotations to my code to help in my quest for bug free code:

    @Valid
    @NotNull
    @Size(min=0, max=1)
    public String getResult() {
        ...

Now, method validation (validating method calls, rather than validating the bean itself) is something which comes as an extra in Hibernate Validator, and which really isn't part of JSR-303 - it's only described in Appendix C as optional. To test this, I used Google Guice to add an AOP interceptor to any methods marked with the @Valid annotation, and in that interceptor, I call the Hibernate Method Validator. I end up with something like this:

.
.
.
    Injector injector = Guice.createInjector(new AbstractModule(){
        protected void configure() {
            bindInterceptor(Matchers.any(),
                    Matchers.annotatedWith(Valid.class),
                    new MethodValidatorInterceptor());
        }
    });
    SomeClass someClass = injector.getInstance(SomeClass.class);
    someClass.init(true, false);
    assertEquals("a", someClass.getResult());

    someClass = injector.getInstance(SomeClass.class);
    someClass.init(false, true);
    assertEquals("b", someClass.getResult());
.
.
.
public class MethodValidatorInterceptor<T> implements MethodInterceptor {

    // obtained from Hibernate Validator, e.g. via
    // Validation.buildDefaultValidatorFactory().getValidator()
    //     .unwrap(MethodValidator.class)
    private MethodValidator methodValidator;

    public Object invoke(MethodInvocation invocation) throws Throwable {

        //validate all inputs
        Set<MethodConstraintViolation<T>> mcvs =
            methodValidator.validateAllParameters((T)invocation.getThis(),
               invocation.getMethod(), invocation.getArguments());
        if(mcvs.size() != 0){
            throw new IllegalArgumentException(String.valueOf(mcvs));
        }

        //call through to the actual method being called
        Object ret = invocation.proceed();

        //validate result
        mcvs = methodValidator.validateReturnValue((T)invocation.getThis(),
            invocation.getMethod(), ret);
        if(mcvs.size() != 0){
            throw new IllegalArgumentException(String.valueOf(mcvs));
        }

        return ret;
    }
}
.
.
.

The above is something which a (Java EE) container should do for me - I was just messing around with simple classes in a simple Java project. Now, it isn't quite complete, because I still have 100% coverage and I still have that bug: the new annotations haven't really done anything useful. Well, that isn't entirely true - the reader of the code knows that the contract is a little stricter than it was when it was simple Javadoc, and may assume that the system will check these constraints when the method is called. But while there is still a bug, I've laid the path for adding some full pre- or post-conditions. The next step was to add a new annotation and an interface, and to make use of them in the interceptor.

@Target( { ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface PreCondition {
    Class<? extends ConditionChecker> implementation();
}

/** Any pre- or post-condition is written in
  * an implementation of this interface
  */
public interface ConditionChecker {
    void checkCondition()
        throws ConstraintViolationException;
}

The annotation can be added to the business code, in this case to add a pre-condition. I created a similar annotation for post-conditions. When I add the pre-condition to a method, I also state which class contains the code for checking that pre-condition:

    @Valid
    @NotNull
    @Size(min=0, max=1)
    @PreCondition(implementation=MyPreAndPostCondition.class)
    public String getResult() {
        ...

The interceptor can check for the presence of such a pre-condition annotation before calling through to the method being called. If the annotation is found, the interceptor attempts to create an instance of the implementation class and calls its "checkCondition" method.
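Inside the interceptor's "invoke" method, that check might look something like this (a minimal sketch, without error handling):

    PreCondition pre = invocation.getMethod().getAnnotation(PreCondition.class);
    if(pre != null) {
        // instantiate the declared checker and let it veto the call
        ConditionChecker checker = pre.implementation().newInstance();
        checker.checkCondition(); // throws ConstraintViolationException on failure
    }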

So the final part of this puzzle is to write a pre-condition which will stop me getting 100% coverage when I run the test shown near the top of this post. Here it is, implemented as an inner class of SomeClass (it cannot be static, because it needs access to the instance fields "a" and "b"):

public class MyPreAndPostCondition implements ConditionChecker {
    @Override
    public void checkCondition()
            throws ConstraintViolationException {

        //I'm going to list all the combinations of
        //a and b which I support
        if(!a && b) return;
        if(!b && a) return;
        if(!b && !a) return;
        if(a && b) return;
    }
}

When I now test my code, I no longer get 100% coverage, because the last two lines in the pre-condition are not covered. Hooray! The tools can now tell me what I still need to test...

Now, while this technical solution works, I think it is really ugly. Like I said, if the code causing the problem were buried among one or two hundred other lines of code, and relied on local variables rather than fields in my class, I would have no chance of using a pre-condition like this to help me locate the problem.

So to sum up, DBC isn't going to solve the problem that code with 100% coverage can still contain errors. I think DBC frameworks (and there are a lot out there, some of which do exactly what I have done here using the @PreCondition annotation) help to make contracts more concrete. If you use method validation from Hibernate Validator, you don't have to write as much Javadoc, because the reader knows that the container will give up before calling the method if anything fails to validate. To me as a programmer, that is much more satisfying than praying that some Javadoc reflects what I have really implemented.

I have known programmers who don't write tests because a DBC framework is in place and that makes them feel safe. But just because you declare a contract does not mean the code actually works. Whether the code fails hard with a validation exception, or some time later because your code is buggy, makes no difference - both are unacceptable! From that perspective, DBC contracts are simply hints to the tester about what could be useful tests, and they ensure that the code fails hard, early.

While I was refreshing my testing skills, I also learned the difference between mocks and stubs. For years I had thought they were the same thing, but it turns out that stubs return pre-fed answers, whereas mocks additionally verify the calls made to them. On a different thread at DZone, someone made the point that unit testing was pointless because it never helped them find bugs, and it caused lots of work when refactoring, because all refactoring does is break their tests. I'd say that this is simply a question of black box versus white box testing. Black box unit tests should never break when you refactor your code - the tests are simply clients of the code which you are refactoring, and tools like Eclipse will modify the calling code, including tests, if you modify the interfaces being called. You can get pretty good results using just black box tests - the large majority of the tests I write are black box tests, and when I write such tests, I tend to have bug free code.
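To illustrate the distinction, here is a sketch using Mockito (my choice of tool; the UserDao and UserService names are invented for the example):

    // stub: pre-fed answer, no verification of how it was used
    UserDao dao = mock(UserDao.class);
    when(dao.findUser("ant")).thenReturn(new User("ant"));

    // mock: after exercising the code, verify the expected interaction happened
    UserService service = new UserService(dao);
    service.deleteUser("ant");
    verify(dao).deleteUser("ant");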

I've talked about writing contracts to help the reader determine what to expect when they use your code. Unit tests work similarly, because they show the reader examples of how to call your code and what to expect when your code changes system state. They hint to the reader how the writer intended the code to be used. While I don't advocate TDD (perhaps only because I have never been on a project which used it, or at a company which valued quality enough to warrant it), I do encourage writing tests and using validation and pre-/post-conditions, because they help document the code with the added bonus of finding the occasional bug. At the same time, I am an architect, and we need to keep things like budgets in mind. You have an influence on your budget when you estimate, assuming you are allowed to do that. Your customer might push you to reduce your estimates, and that is an indication of their commitment to quality, because they won't budge on scope or delivery dates! So write as many tests as you can within your budget, starting with the code which is most complex and which gets called most often. And remember, 100% coverage is not really your best friend, because bugs may still lurk.

© 2011, Ant Kutschera

A really simple but powerful rule engine

UPDATE: Version 2.2.0, JavaScript (Nashorn) rules in the JVM - see here.

UPDATE: Version 2.1.0, Java 8, Maven, GitHub - see here.

UPDATE: Version 2.0 of the library now exists which supports Scala. It is a breaking change in that "Action" instances as shown below are now "AbstractAction", and the "Action" class is only supported in Scala, where functions can be passed to the action constructor instead of having to override the AbstractAction. Scala collections are also supported in the engine.

I have the requirement to use a rule engine. I want something lightweight and fairly simple, yet powerful. While there are products out there which are super good, I don't want something with a big learning overhead. And, I fancied writing my own!

Here are my few basic requirements:

  • Use some kind of expression language to write the rules,
  • It should be possible to store rules in a database,
  • Rules need a priority, so that only the best can be fired,
  • It should also be possible to fire all matching rules,
  • Rules should evaluate against one input which can be an object, like a tree, containing all the information which the rules need to evaluate against,
  • Predefined actions which are programmed in the system should be executed when certain rules fire.

So to help clarify those requirements, imagine the following examples:

1) In some forum system, the administrator needs to be able to configure when emails are sent.

Here, I would write some rules like "when the config flag called sendUserEmail is set to true, send an email to the user" and "when the config flag called sendAdministratorEmail is true and the user has posted less than 5 postings, send an email to the administrator".

2) A tarif system needs to be configurable, so that the best tarif can be offered to customers.

For that, I could write rules like these: "when the person is younger than 26 the Youth tarif is applicable", "when the person is older than 59, the Senior tarif is applicable", and "when the person is neither a youth, nor a senior, then they should be given the Default tarif, unless they have had an account for more than 24 months, in which case they should be offered the Loyalty tarif".

3) A train ticket can be considered to be a product. Depending upon the travel request, different products are suitable.

A rule here, could be something like: "if the travel distance is more than 100km and first class is desired, then product A is to be sold."

Finally, a more complex example, involving some iteration of the input, rather than just property evaluation:

4) Timetable software needs to determine when students can leave school.

A rule for that might read: "If a class contains any student under age 10, the entire class gets to leave early. Otherwise, they leave at the normal time."

So, with those requirements in mind, I went to look for an expression language. I started with the unified expression language specified in JSP 2.1. Using the Jasper jar from Tomcat and the Apache Commons EL jar, I got something up and running very quickly. Then I discovered the MVEL library from Codehaus.org, which incidentally is used in Drools (the leading Java rule engine?), and it worked even better. It offers far more functionality, as far as I can tell.

So, I designed my rule engine to work like this:

1) An engine is configured with some rules.

2) A rule has these attributes:
    - namespace: an engine may contain many rules, but only some may be relevant to a particular call and this namespace can be used for filtering
    - name: a unique name within a namespace
    - expression: an MVEL expression for the rule
    - outcome: a string which the engine might use if this rule's expression evaluates to true
    - priority: an integer. The bigger the value, the higher the priority.
    - description: a useful description to aid the management of rules.

3) The engine is given an input object and evaluates all rules (optionally within a namespace) and either:
    a) returns all rules which evaluate to true,
    b) returns the outcome (string) from the rule with the highest priority, out of all rules evaluating to true,
    c) executes an action (defined within the application) which is associated with the outcome of the rule with the highest priority, out of all rules evaluating to true.

4) "Action"s are instances of classes which the application programmer can supply. An action is given a name. When the engine is asked to execute an action based on the rules, the name of the action matching the "winning" rule's outcome is executed.

5) A rule can be built up from "sub-rules". A subrule is only ever used as a building block on which to base more complex rules. When evaluating rules, the engine will never select a subrule to be the best (highest priority) "winning" rule, i.e. one evaluating to true. Subrules make it easier to build complex rules, as I shall show shortly.

So, time for some code!

First, let's look at the code for the tarif system:

Rule r1 = new Rule("YouthTarif", "input.person.age < 26", "YT2011", 3, "ch.maxant.someapp.tarifs", null);
Rule r2 = new Rule("SeniorTarif", "input.person.age > 59", "ST2011", 3, "ch.maxant.someapp.tarifs", null);
Rule r3 = new Rule("DefaultTarif", "!#YouthTarif && !#SeniorTarif", "DT2011", 3, "ch.maxant.someapp.tarifs", null);
Rule r4 = new Rule("LoyaltyTarif", "#DefaultTarif && input.account.ageInMonths > 24", "LT2011", 4, "ch.maxant.someapp.tarifs", null);
List<Rule> rules = Arrays.asList(r1, r2, r3, r4);

Engine engine = new Engine(rules, true);

TarifRequest request = new TarifRequest();
request.setPerson(new Person("p"));
request.setAccount(new Account());

request.getPerson().setAge(24);
request.getAccount().setAgeInMonths(5);
String tarif = engine.getBestOutcome(request);


So, in the above code, I have added 4 rules to the engine, and told the engine to throw an exception if any rule cannot be pre-compiled. Then, I created a TarifRequest, which is the input object. That object is passed into the engine, when I ask the engine to give me the best outcome. In this case, the best outcome is the string "YT2011", the name of the most suitable tarif for the customer I added to the tarif request.

How does it all work? When the engine is given the rules, it does some validation on them and pre-compiles them (to improve overall performance). Notice how the first two rules refer to an object called "input"? That is the object passed into the "getBestOutcome" method on the engine. The engine passes the input object to the MVEL class together with each rule's expression. Any time an expression evaluates to "true", the rule is put to the side as a candidate to be the winner. At the end, the candidates are sorted in order of priority, and the outcome field of the rule with the highest priority is returned by the engine.

Notice how the third and fourth rules contain the '#' character. That is not standard MVEL expression language. The engine examines all rules when they are passed to it, and replaces any token starting with a hash symbol with the expression found in the rule named the same as the token, wrapping that expression in brackets. The logger outputs the full rule after referenced rules have been resolved and replaced, just in case you want to check it.
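For example, the "DefaultTarif" rule is written as "!#YouthTarif && !#SeniorTarif", and after resolution the engine evaluates:

    !(input.person.age < 26) && !(input.person.age > 59)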

In the above business case, we were only interested in the best tarif for the customer. Equally, we might have been interested in a list of possible tarifs, so that we could offer the customer a choice. In that case, we could have called the "getMatchingRules" method on the engine, which would have returned all rules, sorted by priority. The tarif names are (in this case) the "outcome" field of the rules.

In the above example, I wanted to receive any of the four outcomes, from the four rules. Sometimes however, you might want to build complex rules based on building blocks, but you might never want those building blocks to be a winning outcome. The train trip example from above can be used to show what I mean here:

Rule rule1 = new SubRule("longdistance", "input.distance > 100", "ch.maxant.produkte", null);
Rule rule2 = new SubRule("firstclass", "input.map[\"travelClass\"] == 1", "ch.maxant.produkte", null);
Rule rule3 = new Rule("productA", "#longdistance && #firstclass", "productA", 3, "ch.maxant.produkte", null);
List<Rule> rules = Arrays.asList(rule1, rule2, rule3);

Engine e = new Engine(rules, true);

TravelRequest request = new TravelRequest(150);
request.put("travelClass", 1);
List rs = e.getMatchingRules(request); 


In the above code, I build rule3 from two subrules, but I never want the outcomes of those building blocks to be output from the engine, so I create them as SubRules. SubRules don't have an outcome field or a priority. They are simply used to build up more complex rules. After the engine has used the sub-rules to replace all tokens beginning with a hash during initialisation, it discards the SubRules - they are not evaluated.

The TravelRequest above takes a distance in the constructor, and contains a map of additional parameters. MVEL lets you easily access map values using the syntax shown in rule 2.

Next, consider the business case of wanting to configure a forum system. The code below introduces actions. Actions are created by the application programmer and supplied to the engine. The engine takes the outcomes (as described in the first example), searches for actions with the same names as those outcomes, and calls the "execute" method on those actions (they all implement the IAction interface). This functionality is useful when a system must be capable of doing predefined things, but the choice of what to do needs to be highly configurable and independent of deployment.

Rule r1 = new Rule("SendEmailToUser", "input.config.sendUserEmail == true", "SendEmailToUser", 1, "ch.maxant.someapp.config", null);
Rule r2 = new Rule("SendEmailToModerator", "input.config.sendAdministratorEmail == true and input.user.numberOfPostings < 5", "SendEmailToModerator", 2, "ch.maxant.someapp.config", null);
List<Rule> rules = Arrays.asList(r1, r2);
		
final List<String> log = new ArrayList<String>();
		
Action<ForumSetup, Void> a1 = new Action<ForumSetup, Void>("SendEmailToUser") {
  @Override
  public Void execute(ForumSetup input) {
    log.add("Sending email to user!");
    return null;
  }
};
Action<ForumSetup, Void> a2 = new Action<ForumSetup, Void>("SendEmailToModerator") {
  @Override
  public Void execute(ForumSetup input) {
    log.add("Sending email to moderator!");
    return null;
  }
};

Engine engine = new Engine(rules, true);

ForumSetup setup = new ForumSetup();
setup.getConfig().setSendUserEmail(true);
setup.getConfig().setSendAdministratorEmail(true);
setup.getUser().setNumberOfPostings(2);
			
engine.executeAllActions(setup, Arrays.asList(a1, a2));


In the code above, the actions are passed to the engine when we call the "executeAllActions" method. In this case, both actions are executed, because the setup object causes both rules to evaluate to true. Note that the actions are executed in order of priority, highest first. Each action is only ever executed once - its name is noted after execution and it will not be executed again until the engine's "execute*Action*" method is called again. Also, if you only want the action associated with the best outcome to be executed, call the "executeBestAction" method instead of "executeAllActions".

Finally, let's consider the classroom example.

String expression = 
    "for(student : input.students){" +
    "	if(student.age < 10) return true;" +
    "}" +
    "return false;";

Rule r1 = new Rule("containsStudentUnder10", expression , "leaveEarly", 1, "ch.maxant.rules", "If a class contains a student under 10 years of age, then the class may go home early");
		
Rule r2 = new Rule("default", "true" , "leaveOnTime", 0, "ch.maxant.rules", "this is the default");
		
Classroom classroom = new Classroom();
classroom.getStudents().add(new Person(12));
classroom.getStudents().add(new Person(10));
classroom.getStudents().add(new Person(8));

Engine e = new Engine(Arrays.asList(r1, r2), true);
		
String outcome = e.getBestOutcome(classroom);


The outcome above is "leaveEarly", because the classroom contains one student whose age is less than 10. MVEL lets you write some pretty comprehensive expressions, and is really a programming language in its own right. The engine simply requires a rule to return true if the rule is to be considered a candidate for firing.

There are more examples in the JUnit tests contained in the source code.

So, the requirements are fulfilled, except for "It should be possible to store rules in a database". While this library doesn't support reading and writing rules to / from a database, rules are String based, so it wouldn't be hard to write some JDBC or JPA code which reads rules out of a database, populates Rule objects and passes them to the Engine. I haven't added this to the library, because the management of rules, like most of these things, is quite project specific. And because my library will never be as cool or popular as Drools, I'm not sure it would be worth my while to add such functionality.
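Such code might look something like this (a sketch only - the table and column names are invented; the Rule constructor arguments match the examples above):

    List<Rule> rules = new ArrayList<Rule>();
    ResultSet rs = connection.createStatement().executeQuery(
        "select name, expression, outcome, priority, namespace, description from rules");
    while(rs.next()) {
        rules.add(new Rule(rs.getString("name"), rs.getString("expression"),
            rs.getString("outcome"), rs.getInt("priority"),
            rs.getString("namespace"), rs.getString("description")));
    }
    Engine engine = new Engine(rules, true);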

I've put the rule engine into an OSGi library with the LGPL licence and it can be downloaded from my tools site. This library depends on MVEL, which can be downloaded here (I used version 2.0.19). If you like it, let me know!

© 2011, Ant Kutschera


JSR-299 & @Produces

I've been reading JSR-299 and I have a few thoughts.

First, a quote from chapter 1:

"The use of these services significantly simplifies the task of creating Java EE applications by integrating the Java EE web tier with Java EE enterprise services. In particular, EJB components may be used as JSF managed beans, thus integrating the programming models of EJB and JSF."

The second sentence made my ears prick up. Do I really want EJBs to be the backing beans of web pages? To me that sounds like one is mixing up responsibilities in MVC. Sure, there are people out there who say that an MVC Controller should be skinny and the model should be fat (youtube ad). Perhaps I'm old school, but I prefer my JSF page to have a backing bean which is my view model and deals with presentation specific logic, and when it needs transaction and security support, it can call through to an EJB which deals with more businessy things.

The JSR then goes on to introduce the @Produces annotation. I don't like that annotation and the rest of this blog posting is about why.

When I need an object, like a service or a resource, I can get the container to inject it for me. Typically, I use the @Inject, @EJB, @Resource or @PersistenceContext annotations. In the object which has such things injected, I can write code which uses those objects. I let the container compose my object using those pieces and other beans. There are many advantages, like having code which is easily testable because I can compose the object out of mocks if required, or like being able to configure its composition rather than hard coding it. And we are all used to composing objects like this, from the earliest days of Spring, if not before.

Now, with @Produces, we have a new way to declare where things come from. Take the example in the JSR, where a Login bean is shown (chapter 1). The Login bean "produces" the current user (page 4). Well, let's start with some code:

    @Produces @LoggedIn User getCurrentUser()

If I read that code out loud, I say something with a similar meaning to: "The bean called Login produces the currently logged in user".

Well, that doesn't make too much sense to me, because coming at the design from a pure OO point of view, starting with use-cases, there is no Login bean mentioned in the use case, and even if there were, it does not "produce" the user! The user exists before they login to the software. They are simply authenticated and perhaps authorised after login. The login component of the software (which could be represented by the Login bean) can provide the details of the currently logged in user. The syntax used in that code above has a poor semantic meaning, in my view.

So if this is the modern trendy way of passing information around in a JSF app, I have to ask myself whether we were not able to do such things in the past. Or was it really hard to program before we had the @Produces annotation? Let's look at one solution: some code in a bean which needs to know the logged in user.

    @Inject Login loginBean;

To use that, the Login bean would need to be declared as being @SessionScoped, and guess what - it is already marked so in the JSR example.

To access the current user, I would simply write code like this:

    loginBean.getCurrentUser().getName() blah blah

Simple huh?

If you don't like the idea of injecting the Login bean, then why not simply create a session scoped bean called "CurrentUser" and during login, set the authenticated user attributes into that session scoped bean? Why go to the lengths of adding yet another way of injecting objects, namely the @Produces annotation? Because there is a different way of doing injection, the whole mechanism has become more complex. I believe it is these types of complexity which cause newbies to give up Java EE before fully understanding it, and which cause existing Java advocates to give up and move to Grails, etc.
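A minimal sketch of that alternative (my code, not from the JSR):

    @SessionScoped
    public class CurrentUser implements Serializable {

        private User user; // set by the login component after authentication

        public User getUser() { return user; }

        public void setUser(User user) { this.user = user; }
    }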

To go with the @Produces annotation, we are given qualifiers. Qualifiers are effectively type safe names used for filtering, and as such, potentially better than String constants in an interface somewhere used in conjunction with an @Named annotation. Here is why I don't think Qualifiers should be used as the norm, but rather as the exception. The class of object being injected should have a suitable name to tell the reader what is going on. Consider this code, which injects the current user which the login bean produces:

    @Inject @LoggedIn User currentUser;

There is unnecessary triplication going on in that line: "LoggedIn", "User" and "currentUser". In an ideal world, I don't want three names here, I want one. Java forces me to declare the type, which is actually not really that necessary, because the compiler could, using conventions, infer the type from the name. But let's not go there, and instead accept that we have to declare a type. Why on earth, then, do I want to additionally declare a qualifier? I don't - it's a waste of finger energy. Only when there is ambiguity would I be required to use a qualifier. But if I have a session scoped model which contains the user, I can spare myself the energy of using a qualifier. I would simply inject the model bean and pull the current user out of it. That is what I have been doing for years without a problem. My Mum always told me not to fix something which isn't broken.

In this JBoss getting started guide, they give an example of producing a random number, with code like this (in the producer):

    @Produces @Random int next() {
        return getRandom().nextInt(maxNumber - 1) + 1;
    }

and this code in the consumer:

    @Inject @Random int someRandomNumber;

Sure, it's a toy example. But what is wrong with injecting the Generator bean (which contains that producer method), and calling the method on it which generates the random number? I could even work with @Alternative if I wanted to decide on the actual implementation at deployment time.

For me, injecting the bean, rather than the value is very important for readability. It gives the reader (maintainer) contextual information about where that value is coming from. In my mind, the following code is much better. It is quicker to write, because I don't need to create a qualifier annotation.

    @Inject Generator generator;

And then in that class, some code to use the generator:

    generator.nextRandomInt();

Section 1.3.3 of the JSR gives an example of setting up the entity managers for injection into beans:

    @Produces @PersistenceContext(unitName="UserData") @Users
    EntityManager userDatabaseEntityManager;

And then here, the code which uses that entity manager:

    @Inject @Users EntityManager userDatabaseEntityManager;

Oh, and don't forget the code for the @Users annotation... I end up with two extra files (the annotation and the producer). Is all that code really better than the following, which is simply put into a bean requiring the entity manager:

    @PersistenceContext(unitName=Units.UserData)
    EntityManager userDatabaseEntityManager;

I'm just not convinced that the "modern" & trendy, yet complex JSR-299 way of doing it is better.

Section 3.3 then goes on to tell us a bit more about why producers exist and in which circumstances we might want to use them:

- the objects to be injected are not required to be instances of beans

Since almost every class is now a bean, I am hard pushed to think of a case where I could not create a bean if I wanted something injected.

- the concrete type of the objects to be injected may vary at runtime

This is exactly what the @Alternative and @Specializes annotations are for, and I do not need to use a producer to vary the concrete runtime implementation.
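For example (a sketch with invented names; the alternative is enabled per deployment in beans.xml):

    public interface Greeter { String greet(); }

    public class DefaultGreeter implements Greeter {
        public String greet() { return "hello"; }
    }

    @Alternative // replaces DefaultGreeter when enabled in beans.xml
    public class TestGreeter implements Greeter {
        public String greet() { return "hello from the test double"; }
    }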

- the objects require some custom initialization that is not performed by the bean constructor

Adding the @Inject annotation to a constructor tells the container which constructor to call when special construction is required. How can a producer produce something which a bean couldn't?
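Again, a sketch with invented names - the container calls the annotated constructor, so custom initialization can live there without a producer:

    public class ReportService {

        private final AuditLog auditLog;

        @Inject
        public ReportService(AuditLog auditLog) {
            this.auditLog = auditLog; // custom initialization happens here
        }
    }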

Section 3.4.2 then talks about declaring producer fields. The example they give shows the field having default (package) visibility, yet as far as I can tell, any bean in the deployment bundle, even one outside the producer's package, can use the "product". This is a little like indecently assaulting Java.

So to summarise, I guess I just want to put out a plea to future spec authors. If you can't work out a way to do things simply, especially if we already have that mechanism, please don't go and make things more complicated than they need to be. I don't want to learn 92 pages of JSR just to be good at using beans, when I already have to learn thousands of other pages of JSR just to get working with Java EE.

© 2011, Ant Kutschera


When tool strategy gets on your nerves

In the beginning
A long time ago, before the days of code prettifiers like Jalopy, we wrote code and we tested code. That code was fun to write. And because it was fun to write, we were motivated to write it well and test it well.

Welcome Jalopy
It was around 2008 when a powerful tool which auto-magically reformatted our code was forced on us. In the beginning we rejoiced, for this tool added "TODO: Document me!" to all methods missing Javadoc (lots). It made it very easy to see where programmers had failed to document their code. The numerous TODOs were also great, because they made it impossible to find the real meaty stuff that still needed to be done. And in classes where our constants were defined, in a specific order so that it matched our documentation, Jalopy kindly reordered the constants alphabetically, making it hard to match code to the documentation.

Enter Sonar
A while later, realising that programmers program bad code even though Jalopy is present to save the world, we had Sonar forced on us. It is a great tool, because it tells us how bad our code is and what we need to change in order to make it good. Ahhh, what is "good" I hear you ask? Well that is defined by a central department who knows nothing of our projects and who doesn't care about architecture (where our real problems lie). The people who work there only want drones who all program in the same way, according to their rules and the rules of academics who make a living writing books about design patterns, which no one really understands and everyone implements wrongly in their own way anyway.

But when we realised just how "bad" our code was, it was decided that a tool would be used to automatically modify the bad code and make it good. Bear in mind while reading this, that the bad code was already working well in production, yet the "good" code that was generated still needed testing. So we started testing the generated "good" code and came to realise it didn't always work.

But never fear. To reduce the risks, the task to change the bad code into good code was given to human programmers. Unfortunately none had time to change bad code into good code until the end of the release, when the real work came to an end. Having written bad code which was bug free, some programmers now had time to turn the bad code into good code. So they started delivering patches to convert the code, right at the end of the release. That scared some people, so it was decided that they could check the good code straight into the trunk instead of doing so on the branch using patches.

And then Sonar really ruined my day
Today, we started merging patches from the production branch into the trunk, so that we could start on the next release. Didn't work. Merge conflicts. Why? Because people had modified trunk and checked "good" code in. That unfortunately means that the patches which have been tested and are known to work now need to be merged with "good" code. The risk that we will introduce bugs during the merging is not as low as it really should be. Some days, I really hate tools.

Still, good things come out of bad things: this was my 100th posting!

© 2011, Ant Kutschera
