Testing your plugin with multiple versions of Play

So, you’ve written a plugin for Play…are you sure it works?

I’ve been giving Deadbolt some love recently, and as part of the work I’ve added a test application for functional testing. This is an application that uses all the features of Deadbolt, and is driven by HTTP calls made with REST-Assured. Initially, it was based on Play 2.3.5, but that ignores the other supported Play versions, 2.3.1 through 2.3.4. Additionally, those hard-working people on the Play team at Typesafe keep cranking out new feature-filled versions.

On top of that, support for Scala 2.10.4 and 2.11.1 is required, so cross-Scala-version testing is needed too.

Clearly, testing your plugin against a single version of Play is not enough. Seems like some kind of continuous integration could help us out here…

Building on Travis CI

Deadbolt builds on Travis CI, a great CI platform that’s free for open-source projects. This runs the tests and publishes snapshot versions to Sonatype. I’m not going into detail on this, because there’s already a great guide over at Cake Solutions. You can find the guide here: http://www.cakesolutions.net/teamblogs/publishing-artefacts-to-oss-sonatype-nexus-using-sbt-and-travis-ci-here…

I’ve made some changes to the build script because the plugin code is not at the top level of the repository; rather, it resides one level down. The repository looks like this:

deadbolt-2-java
  |-code        # plugin code lives here
  |-test-app    # the functional test application

As a result, the .travis.yml file that defines the build looks like this:

language: scala
jdk:
- openjdk6
scala:
- 2.11.1
script:
- cd code
- sbt ++$TRAVIS_SCALA_VERSION +test
- cd ../test-app
- sbt ++$TRAVIS_SCALA_VERSION +test
- cd ../code
- sbt ++$TRAVIS_SCALA_VERSION +publish-local
after_success:
- ! '[[ $TRAVIS_BRANCH == "master" ]] && { sbt +publish; };'
env:
global:
- secure: foo
- secure: bar

This sets the Java version (people get angry when I don’t provide Java 6-compatible versions), and defines a script as the build process. Note the cd commands used to switch between the plugin directory and the test-app directory.

This script already covers the cross-Scala version requirement – prefixing a command with +, e.g. +test, will execute that command against every version of Scala defined in your build.sbt. It’s important to note that although only Scala 2.11.1 is defined in .travis.yml, sbt itself will take care of cross-building against the versions declared in crossScalaVersions in build.sbt.

crossScalaVersions := Seq("2.11.1", "2.10.4")

Testing multiple versions of Play

However, the version of Play used by the test-app is still hard-coded to 2.3.5 in test-app/project/plugins.sbt.

addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.3.5")

Happily, .sbt files are not just configuration files but actual code. This means we can change the Play version based on a system property. A default value of 2.3.5 is given to allow the tests to run locally without having to set the version.

addSbtPlugin("com.typesafe.play" % "sbt-plugin" % System.getProperty("playTestVersion", "2.3.5"))

Finally, we update .travis.yml to take advantage of this.

language: scala
jdk:
- openjdk6
scala:
- 2.11.1
script:
- cd code
- sbt ++$TRAVIS_SCALA_VERSION +test
- cd ../test-app
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.1 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.2 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.3 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.4 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.5 +test
- cd ../code
- sbt ++$TRAVIS_SCALA_VERSION +publish-local
after_success:
- ! '[[ $TRAVIS_BRANCH == "master" ]] && { sbt +publish; };'
env:
global:
- secure: foo
- secure: bar

This means the following steps occur during the build:

  • sbt ++$TRAVIS_SCALA_VERSION +test
    • Run the plugin tests against Scala 2.11.1
    • Run the plugin tests against Scala 2.10.4
  • sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.1 +test
    • Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.1
    • Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.1
  • sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.2 +test
    • Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.2
    • Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.2
  • sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.3 +test
    • Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.3
    • Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.3
  • sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.4 +test
    • Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.4
    • Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.4
  • sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.5 +test
    • Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.5
    • Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.5

If all these steps pass, the after_success branch of the build script will execute. If any of the steps fail, the build will break and the snapshots won’t be published.

You can take a look at a repository using this approach here: https://github.com/schaloner/deadbolt-2-java.

The resulting Travis build is available here: https://travis-ci.org/schaloner/deadbolt-2-java.

Deadbolt 2.3.2 released

After a fruitful weekend of coding (and DIY, but that’s not relevant to this), Deadbolt 2.3.2 has been released. The major change is that it’s available via Maven Central, so there’s no need to add a resolver for it.

This line can be removed from your build.sbt – don’t let the door hit its ass on the way out:

resolvers += Resolver.url("Objectify Play Repository", url("http://deadbolt.ws/releases/"))(Resolver.ivyStylePatterns)

NB Older versions of Deadbolt are not yet in Maven, so you’ll still need the resolver for versions < 2.3.2. I’ll be publishing them to Maven as and when time allows.

I used the following guides to prepare all three Deadbolt libraries (deadbolt-core, -java and -scala) for publishing to Sonatype:
Deploying to Sonatype :: http://www.scala-sbt.org/0.13/docs/Using-Sonatype.html
Publishing Scala libraries to Sonatype :: http://www.loftinspace.com.au/blog/publishing-scala-libraries-to-sonatype.html

DeadboltHandler#getSubject
The other change relates to DeadboltHandler#getSubject. Until now, this has been a synchronous call that returns the Subject itself. As of 2.3.2, it now returns something a bit nicer for a reactive architecture.

Scala
The signature of getSubject in the DeadboltHandler trait has changed from def getSubject[A](request: Request[A]): Option[Subject] to def getSubject[A](request: Request[A]): Future[Option[Subject]]. Updating your code is a simple procedure. In place of

override def getSubject[A](request: Request[A]): Option[Subject] = {
    Some(however you retrieve your user)
}

you now have

override def getSubject[A](request: Request[A]): Future[Option[Subject]] = {
    Future(Some(however you retrieve your user))
}

Java
The signature of getSubject in the DeadboltHandler interface has changed from Subject getSubject(Http.Context context) to F.Promise<Subject> getSubject(Http.Context context).

Existing code which looks something like

public Subject getSubject(Http.Context context) {
    return AuthorisedUser.findByUserName("steve");
}

would now look like

public F.Promise<Subject> getSubject(Http.Context context) {
    return F.Promise.promise(new F.Function0<Subject>() {
        @Override
        public Subject apply() throws Throwable {
            return AuthorisedUser.findByUserName("steve");
        }
    });
}
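
If the subject is already to hand when getSubject is called – cached on the HTTP context, for example – there’s no need to schedule any work at all. A minimal sketch, assuming the same AuthorisedUser lookup as above is cheap enough to call in-line:

public F.Promise<Subject> getSubject(Http.Context context) {
    // wrap an already-available value directly rather than running the lookup on the default pool
    return F.Promise.pure(AuthorisedUser.findByUserName("steve"));
}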

All the code is available on GitHub.

If you have any questions or issues, please let me know.

Keep Playing!

JavaScript routing in Play 2 (Scala edition)

In my previous post, I covered using JavaScript routing in Play 2 Java applications. Here’s the Scala version. It’s pretty much a copy of the previous article, so that it stands alone.

One of the nicest features in Play 2, but one which doesn’t seem to be widely covered, is the JavaScript routing that can be generated by the framework to make AJAX-based client code more maintainable. Here, I’m going to cover using this feature in a Scala-based Play 2 application.

Overview

It’s common to see calls in JavaScript making AJAX requests. Frequently, a library such as jQuery is used to provide support. For example, given a route

GET    /foo    controllers.Application.getAll()

a GET request can be made using the shorthand $.get method.

$.get("/foo", function( data ) {
  // do something with the response
});

A variant, in the case where parameters are required, needs a little bit more work.

GET    /foo    controllers.Application.get(personId: Long, taskId: Long)

var personId = $('#person').val();
var taskId = $('#item').val();
$.get("/foo?person=" + personId + '&task=' + taskId, 
      function( data ) {
        // do something with the response
      });

All this seems far removed from the easy style of Play applications, where interactions are idiomatic and typesafe. Happily, Play offers a feature called JavaScript routing. This allows us to use JS objects generated by the back-end, and so we can replace the code above with

var personId = $('#person').val();
var taskId = $('#item').val();
appRoutes.controllers.Application.get(personId, taskId).ajax({
  success: function( data ) {
    // do something with the response
  }
})

This will result in a GET request to /foo with personId and taskId as request parameters.

GET /foo?personId=personId&taskId=taskId

Changing the route file to use a RESTful-style URL of /foo/:personId/:taskId will result in the following call with no changes required to your JS code:

GET /foo/personId/taskId

Let’s take a look at what we need to do to achieve this.

The basics

Time to create our basic application. Using the command line, generate a new Play app

play new jsRoutingScala

Accept the default name of jsRoutingScala, and select the option for a Scala application.

 _ __ | | __ _ _  _| |
| '_ \| |/ _' | || |_|
|  __/|_|\____|\__ (_)
|_|            |__/

play! 2.1.5 (using Java 1.7.0_17 and Scala 2.10.0), http://www.playframework.org

The new application will be created in /tmp/jsRoutingScala

What is the application name? [jsRoutingScala]
> 

Which template do you want to use for this new application? 

  1             - Create a simple Scala application
  2             - Create a simple Java application

> 1
OK, application jsRoutingScala is created.

Have fun!

This will give us a basic controller called Application, and a couple of views. We still need to create a model class, so let’s do that now. In the app directory, create a package called models. In the models package, create a class called Person. Note the JSON read/write support – this will allow the controller to serialize and deserialize instances of Person between Scala and JSON.

package models

import anorm._
import anorm.SqlParser._
import play.api.Play.current
import play.api.db.DB
import play.api.libs.json._
import anorm.~

case class Person(id: Pk[Long] = NotAssigned, name: String)

object Person {

  val simple = {
    get[Pk[Long]]("person.id") ~
      get[String]("person.name") map {
      case id~name => Person(id, name)
    }
  }

  def insert(person: Person) = {
    DB.withConnection { implicit connection =>
      SQL(
        """
          insert into person values (
            (select next value for person_seq),
            {name}
          )
        """
      ).on(
        'name -> person.name
      ).executeInsert()
    } match {
      case Some(long) => long
      case None       => -1
    }
  }

  def delete(id: Long) = {
    DB.withConnection { implicit connection =>
      SQL("delete from person where id = {id}").on('id -> id).executeUpdate()
    }
  }

  def getAll: Seq[Person] = {
    DB.withConnection { implicit connection =>
      SQL("select * from person").as(Person.simple *)
    }
  }

  implicit object PersonReads extends Reads[Person] {
    def reads(json: JsValue): JsResult[Person] =
      JsSuccess[Person](Person(NotAssigned, (json \ "name").as[String]), JsPath())
  }

  implicit object PersonWrites extends Writes[Person] {
    def writes(person: Person) = Json.obj(
      "id" -> Json.toJson(person.id.get),
      "name" -> Json.toJson(person.name)
    )
  }
}

We’ll need a database, so edit the conf/application.conf and uncomment the following lines:

db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:mem:play"

Because this example uses Anorm, we need to define the database schema ourselves. In the conf folder, create a folder called evolutions that contains another folder, default. Create a file here called 1.sql, and copy the following into it:

# --- !Ups
create table person (
  id bigint not null,
  name varchar(255),
  constraint pk_person primary key (id))
;

create sequence person_seq;

# --- !Downs
SET REFERENTIAL_INTEGRITY FALSE;

drop table if exists person;

SET REFERENTIAL_INTEGRITY TRUE;

drop sequence if exists person_seq;

We’re now ready to build our controller.

The controller

In the controllers package, open the Application object. There’s already an index method, but we don’t need to change this.

The app will allow us to create, delete and get Person instances, and so we need methods to support these operations.

def getAll() = Action {
  Ok(Json.toJson(Person.getAll))
}

def delete(id: Long) = Action {
  Person.delete(id)
  Ok("Deleted " + id)
}

def create = Action { implicit request =>
  request.body.asJson match {
    case None => BadRequest
    case Some(json: JsValue) => {
      val person: Person = json.as[Person]
      val id = Person.insert(person)
      Ok(Json.obj(
        "id" -> id,
        "name" -> person.name
      ))
    }
  }
}

These methods cover our business logic, so we can add them to the routes file

GET     /person       controllers.Application.getAll()
DELETE  /person/:id   controllers.Application.delete(id: Long)
POST    /person       controllers.Application.create()

Add JS routing support

So far, so normal. We have business logic that can be accessed via HTTP calls, but no specialised JS support. We need to add another method to the controller that specifies which routes we want to be available in the JS routing object.

def jsRoutes = Action { implicit request =>
  Ok(Routes.javascriptRouter("appRoutes")(
    controllers.routes.javascript.Application.create,
    controllers.routes.javascript.Application.delete,
    controllers.routes.javascript.Application.getAll))
    .as("text/javascript")
}

This method will generate a JavaScript file that can be loaded into the client. We’ll see this when we get to the view. Because it’s something called from the client, we need to add another entry to the routes. It’s extremely important to note the placement of the route – it must precede the existing “/assets/*file” entry, otherwise that route will consume the request.

GET           /assets/js/routes             controllers.Application.jsRoutes()
GET           /assets/*file                 controllers.Assets.at(path="/public", file)

The view

Now we get to where the action is. The first step is to make our view aware of the JS routing code, and we do this by simply adding a script tag to views/main.scala.html

<script src="@controllers.routes.Application.jsRoutes()" type="text/javascript"></script>

We’re finally ready to use our JS routing object. The following code is an utterly basic single-page app that hooks into the get, create and delete functionality of the back-end. Copy and paste this code into your existing views/index.scala.html file, and then take a look at what it’s doing. You may notice this code is identical to that in the Java example.

@(message: String)

@main("Play 2 JavaScript Routing") {

    <fieldset>
        <form>
            <label for="personName">Name:
            <input type="text" name="personName" id="personName"/>
            <input type="button" value="Create" id="createPerson"/>
        </form>
    </fieldset>
    <ul id="peopleList">

    <script>
    var doc = $ (document);
    doc.ready (function() {
      // Delete a person
      doc.on ('click', '.deletePerson', function(e) {
        var target = $(e.target);
        var id = target.data('id');
        appRoutes.controllers.Application.delete(id).ajax( {
          success : function ( data ) {
            target.closest('li').remove();
          }
        });
      });

      // Create a new person
      $('#createPerson').click(function() {
        var personNameInput = $('#personName');
        var personName = personNameInput.val();
        if(personName && personName.length > 0) {
          var data = {
            'name' : personName
          };
          appRoutes.controllers.Application.create().ajax({
            data : JSON.stringify(data),
            contentType : 'application/json',
            success : function (person) {
              $('#peopleList').append('<li>' + person.name + ' <a href="#" data-id="' + person.id + '" class="deletePerson">Delete</a></li>');
                personNameInput.val('');
            }
          });
        }
      });

      // Load existing data from the server
      appRoutes.controllers.Application.getAll().ajax({
        success : function(data) {
          var peopleList = $('#peopleList');
          $(data).each(function(index, person) {
            peopleList.append('<li>' + person.name + ' <a href="#" data-id="' + person.id + '" class="deletePerson">Delete</a></li>');
          });
        }
      });
    }) ;
    </script>
}

There are three lines here that will generate calls to the server, namely

  • appRoutes.controllers.Application.delete(id).ajax
  • appRoutes.controllers.Application.create().ajax
  • appRoutes.controllers.Application.getAll().ajax

Let’s take delete as the example, since it takes a parameter.

  • appRoutes
    • Take a look at your Application class, and you’ll see the name appRoutes being given to the JS router. This forms the base of the namespace of the JS routing object, and means you can have multiple JS routing objects from different controllers imported into the same view, as long as you give each one a unique name.
  • controllers.Application
    • This is the fully qualified name of the target controller. Note that it matches the FQN of the Scala object.
  • delete(id)
    • The method call, including parameters. This returns an object which, among other things, exposes an ajax function that can be used to control the behaviour of the HTTP call.
  • ajax
    • This function makes the call to the server. It’s here that you add success and error callbacks, change the content-type of the request to match your back-end requirements, add PUT or POST data, etc.

Summary

That seemed like a lot of work, but most of it was creating the project. The actual JS routing represents a very small time investment, for a pretty good pay-off. It’s very easy to retrofit existing code to use this approach, and very simple to continue in this style once you experience the benefits.

Example application

You can find the example code from this article on GitHub: https://github.com/schaloner/jsRoutingScala

Java

This article covered how to use this feature from a Scala viewpoint. In a sibling article, Java support is also demonstrated. The differences between the two are minimal, and non-existent from a client-side perspective. You can read this article here.

JavaScript routing in Play 2 (Java edition)

One of the nicest features in Play 2, but one which doesn’t seem to be widely covered, is the JavaScript routing that can be generated by the framework to make AJAX-based client code more maintainable. Here, I’m going to cover using this feature in a Java-based Play 2 application.

Overview

It’s common to see calls in JavaScript making AJAX requests. Frequently, a library such as jQuery is used to provide support. For example, given a route

GET    /foo    controllers.Application.getAll()

a GET request can be made using the shorthand $.get method.

$.get("/foo", function( data ) {
  // do something with the response
});

A variant, in the case where parameters are required, needs a little bit more work.

GET    /foo    controllers.Application.get(personId: Long, taskId: Long)

var personId = $('#person').val();
var taskId = $('#item').val();
$.get("/foo?person=" + personId + '&task=' + taskId, 
      function( data ) {
        // do something with the response
      });

All this seems far removed from the easy style of Play applications, where interactions are idiomatic and typesafe. Happily, Play offers a feature called JavaScript routing. This allows us to use JS objects generated by the back-end, and so we can replace the code above with

var personId = $('#person').val();
var taskId = $('#item').val();
appRoutes.controllers.Application.get(personId, taskId).ajax({
  success: function( data ) {
    // do something with the response
  }
})

This will result in a GET request to /foo with personId and taskId as request parameters.

GET /foo?personId=personId&taskId=taskId

Changing the route file to use a RESTful-style URL of /foo/:personId/:taskId will result in the following call with no changes required to your JS code:

GET /foo/personId/taskId

Let’s take a look at what we need to do to achieve this.

The basics

Time to create our basic application. Using the command line, generate a new Play app

play new jsRoutingJava

Accept the default name of jsRoutingJava, and select the option for a Java application.

 _ __ | | __ _ _  _| |
| '_ \| |/ _' | || |_|
|  __/|_|\____|\__ (_)
|_|            |__/

play! 2.1.5 (using Java 1.7.0_17 and Scala 2.10.0), http://www.playframework.org

The new application will be created in /tmp/jsRoutingJava

What is the application name? [jsRoutingJava]
> 

Which template do you want to use for this new application? 

  1             - Create a simple Scala application
  2             - Create a simple Java application

> 2
OK, application jsRoutingJava is created.

Have fun!

This will give us a basic controller called Application, and a couple of views. We still need to create a model class, so let’s do that now. In the app directory, create a package called models. In the models package, create a class called Person.

package models;

import play.db.ebean.Model;
import javax.persistence.Entity;
import javax.persistence.Id;
import java.util.List;

@Entity
public class Person extends Model
{
    private static Finder<Long, Person> FIND = new Finder<>(Long.class, Person.class);

    @Id
    public Long id;

    public String name;

    public static List<Person> getAll()
    {
        return FIND.all();
    }
	
    public static Person getById(Long id)
    {
        return FIND.byId(id);
    }
}

We’ll need a database, so edit the conf/application.conf and uncomment the following lines:

db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:mem:play"
ebean.default="models.*"

We’re now ready to build our controller.

The controller

In the controllers package, open the Application class. There’s already an index method, but we don’t need to change this.

The app will allow us to create, delete and get Person instances, and so we need methods to support these operations.

public static Result getAll()
{
    return ok(Json.toJson(Person.getAll()));
}

public static Result delete(Long id)
{
    Result result;
    Person person = Person.getById(id);
    if (person != null)
    {
        person.delete();
        result = ok(person.name + " deleted");
    }
    else
    {
        result = notFound(String.format("Person with ID [%d] not found", id));
    }
    return result;
}

public static Result create()
{
    JsonNode json = request().body().asJson();
    Person person = Json.fromJson(json, Person.class);
    person.save();
    return ok(Json.toJson(person));
}

These methods cover our business logic, so we can add them to the routes file

GET     /person       controllers.Application.getAll()
DELETE  /person/:id   controllers.Application.delete(id: Long)
POST    /person       controllers.Application.create()

Add JS routing support

So far, so normal. We have business logic that can be accessed via HTTP calls, but no specialised JS support. We need to add another method to the controller that specifies which routes we want to be available in the JS routing object.

public static Result jsRoutes()
{
    response().setContentType("text/javascript");
    return ok(Routes.javascriptRouter("appRoutes", //appRoutes will be the JS object available in our view
                                      routes.javascript.Application.getAll(),
                                      routes.javascript.Application.delete(),
                                      routes.javascript.Application.create()));
}

This method will generate a JavaScript file that can be loaded into the client. We’ll see this when we get to the view. Because it’s something called from the client, we need to add another entry to the routes. It’s extremely important to note the placement of the route – it must precede the existing “/assets/*file” entry, otherwise that route will consume the request.

GET           /assets/js/routes             controllers.Application.jsRoutes()
GET           /assets/*file                 controllers.Assets.at(path="/public", file)

The view

Now we get to where the action is. The first step is to make our view aware of the JS routing code, and we do this by simply adding a script tag to views/main.scala.html

<script src="@controllers.routes.Application.jsRoutes()" type="text/javascript"></script>

We’re finally ready to use our JS routing object. The following code is an utterly basic single-page app that hooks into the get, create and delete functionality of the back-end. Copy and paste this code into your existing views/index.scala.html file, and then take a look at what it’s doing.

@(message: String)

@main("Play 2 JavaScript Routing") {

    <fieldset>
        <form>
            <label for="personName">Name:
            <input type="text" name="personName" id="personName"/>
            <input type="button" value="Create" id="createPerson"/>
        </form>
    </fieldset>
    <ul id="peopleList">

    <script>
    var doc = $ (document);
    doc.ready (function() {
      // Delete a person
      doc.on ('click', '.deletePerson', function(e) {
        var target = $(e.target);
        var id = target.data('id');
        appRoutes.controllers.Application.delete(id).ajax( {
          success : function ( data ) {
            target.closest('li').remove();
          }
        });
      });

      // Create a new person
      $('#createPerson').click(function() {
        var personNameInput = $('#personName');
        var personName = personNameInput.val();
        if(personName && personName.length > 0) {
          var data = {
            'name' : personName
          };
          appRoutes.controllers.Application.create().ajax({
            data : JSON.stringify(data),
            contentType : 'application/json',
            success : function (person) {
              $('#peopleList').append('<li>' + person.name + ' <a href="#" data-id="' + person.id + '" class="deletePerson">Delete</a></li>');
                personNameInput.val('');
            }
          });
        }
      });

      // Load existing data from the server
      appRoutes.controllers.Application.getAll().ajax({
        success : function(data) {
          var peopleList = $('#peopleList');
          $(data).each(function(index, person) {
            peopleList.append('<li>' + person.name + ' <a href="#" data-id="' + person.id + '" class="deletePerson">Delete</a></li>');
          });
        }
      });
    }) ;
    </script>
}

There are three lines here that will generate calls to the server, namely

  • appRoutes.controllers.Application.delete(id).ajax
  • appRoutes.controllers.Application.create().ajax
  • appRoutes.controllers.Application.getAll().ajax

Let’s take delete as the example, since it takes a parameter.

  • appRoutes
    • Take a look at your Application class, and you’ll see the name appRoutes being given to the JS router. This forms the base of the namespace of the JS routing object, and means you can have multiple JS routing objects from different controllers imported into the same view, as long as you give each one a unique name.
  • controllers.Application
    • This is the fully qualified name of the target controller. Note that it matches the FQN of the Java class.
  • delete(id)
    • The method call, including parameters. This returns an object which, among other things, exposes an ajax function that can be used to control the behaviour of the HTTP call.
  • ajax
    • This function makes the call to the server. It’s here that you add success and error callbacks, change the content-type of the request to match your back-end requirements, add PUT or POST data, etc.

Summary

That seemed like a lot of work, but most of it was creating the project. The actual JS routing represents a very small time investment, for a pretty good pay-off. It’s very easy to retrofit existing code to use this approach, and very simple to continue in this style once you experience the benefits.

Happy new year.

Example application

You can find the example code from this article on GitHub: https://github.com/schaloner/jsRoutingJava

Scala

This article covered how to use this feature from a Java viewpoint. In a sibling article, Scala support is also demonstrated. The differences between the two are minimal, and non-existent from a client-side perspective. The Scala article is available here.

Banks and social engineering – a short story of fail

The fair is in town, and I need money. Where do I get money from? The bank.

I enter the bank. I see a piece of paper loosely stuck to the ATM advertising the bank’s mobile application, and conveniently including a QR code.

Anyone else see a problem here?

Examples of malicious QR codes were already being reported two years ago. Is it really such a stretch of the imagination, given the number of phishing sites that mimic banks, that fraudulent mobile applications will appear? QR codes represent a potential path to get these apps onto devices.

For any company to offer an app via a QR code printed on a scrap of paper and pasted to the side of an ATM shows a stunning lack of understanding of the potential for damage. Even an official poster bearing a QR code can be hijacked by scammers placing QR code stickers over the true QR code. Even that modicum of effort isn’t needed in this situation. A scammer can print out a piece of paper, fasten it to an ATM after the bank has closed, and remove it before the bank opens.

What could the app do?
An app that you cheerfully feed your bank details into can at least harvest your account details. If your bank’s browser-based web app requires a card reader code for access, that code can be requested by the mobile app and used to – manually or automatically – log into the bank’s web app. A simple “service unavailable” message delivered to the user leads them to believe no access has occurred, while in reality their account is being drained.

An unlikely scenario? Hardly. Nothing I have described above is new, and even fairly rudimentary coding skills would suffice to pull this off.

What should the bank do?
Not use QR codes! If my bank calls me with a question, I tell them I’ll call them back and hang up. I never accept “pushed” data – instead, I get the phone number from my bank documentation and call that number. The same principle applies here – banks should advertise the availability of mobile apps, but actual installation should only occur from a trusted source.

Needless to say, I don’t scan QR codes.

A good, lazy way to write tests

Testing. I’ve been thinking a lot about testing recently. As part of code reviews I’ve done for various projects, I’ve seen thousands of lines of untested code. This is not just a case of test coverage statistics pointing this out; it’s more a case of there not being any tests at all in these projects. And the two reasons I keep hearing for this sad state of affairs? “We don’t have time”, swiftly followed by “We’ll do it when we’ve finished the code”.

What I present here is not a panacea for testing. It covers unit testing, and specifically, unit testing of interfaces. Interfaces are good things. Interfaces define contracts. Interfaces, regardless of how many implementations they have, can be tested easily and with very little effort. Let’s see how, using this class structure as an example.

A simple inheritance tree

CustomerService is our interface. It has two methods, to keep the example simple, and is described below. Note the javadoc – this is where the contract is described.

public interface CustomerService
{
    /**
     * Retrieve the customer from somewhere.
     * @param userName the userName of the customer
     * @return a non-null Customer instance compliant with the userName
     * @throws CustomerNotFoundException if a customer with the given user name can not be found
     */
    Customer get(String userName) throws CustomerNotFoundException;

    /**
     * Persist the customer.
     * @param customer the customer to persist
     * @return the customer as it now exists in its persisted form
     * @throws DuplicateCustomerException if a customer with the user name already exists
     */
    Customer create(Customer customer) throws DuplicateCustomerException;
}

As we can see from the diagram, we have two implementations of this interface, RemoteCustomerService and CachingCustomerService. Their implementations are not shown, because they don’t matter. How can I say this? Simple – we are testing the contract. We write tests for every method in the interface, along with every permutation of the contract. For example, for get() we need to test what happens when a customer with the given user name is present, and what happens when it isn’t present.

public abstract class CustomerServiceTest
{
    @Test
    public void testCreate() throws DuplicateCustomerException
    {
        CustomerService customerService = getCustomerService();
        Customer customer = customerService.create(new Customer("userNameA"));

        Assert.assertNotNull(customer);
        Assert.assertEquals("userNameA",
                            customer.getUserName());
    }

    @Test(expected = DuplicateCustomerException.class)
    public void testCreate_duplicate() throws DuplicateCustomerException
    {
        CustomerService customerService = getCustomerService();
        Customer customer = new Customer("userNameA");
        customerService.create(customer);
        customerService.create(customer);
    }

    @Test
    public void testGet() throws DuplicateCustomerException, CustomerNotFoundException
    {
        CustomerService customerService = getCustomerService();
        customerService.create(new Customer("userNameA"));
        Customer customer = customerService.get("userNameA");
        
        Assert.assertNotNull(customer);
        Assert.assertEquals("userNameA",
                            customer.getUserName());
    }

    @Test(expected = CustomerNotFoundException.class)
    public void testGet_noUser() throws CustomerNotFoundException
    {
        CustomerService customerService = getCustomerService();
        customerService.get("userNameA");
    }

    public abstract CustomerService getCustomerService();
}

We now have a test for the contract, and at no point have we mentioned any of the implementations. This means two things:

  • We don’t need to duplicate the tests for each and every implementation. This is a Very Good Thing.
  • None of the implementations are being tested. We can correct this by adding one test class per implementation. Since each test class will be almost identical, I’ll just show the test of RemoteCustomerService.

public class RemoteCustomerServiceTest extends CustomerServiceTest
{
    public CustomerService getCustomerService()
    {
        return new RemoteCustomerService();
    }
}

And that’s it! We now have a very simple way to test multiple implementations of any interface by putting in the hard work up front, and reducing the effort of testing new implementations to a single, simple method.

This post is part of a series as explained in I am not a good teacher.

I am not a good teacher

It’s been an interesting few months since I last wrote anything here. Apart from a fairly hectic project in work, my wife is re-training as a software developer, and so I’ve been able to find out many ways in which I am deficient as a teacher. The teachers on her course have done a good job in bringing her to the position where she can quite happily code away on server-side code or Android applications, and it’s only when I try to answer one of her questions that I realise how hard it is to teach someone who has some knowledge but relatively little experience.

So, I’ve decided to write a series of posts that cover some basic topics but which – crucially – I never see being used out in industry. This is as much a way for me to learn how to talk at the correct level as it is to pass on technical information – hopefully, both parts will be successful. And so to business, with part 1 – A good, lazy way to write tests.

Inferred exceptions in Java

Wow, I can’t believe it’s been 6 weeks since I last blogged. I still need to write up a review on Plumbr, after seeing it in action at Devoxx, but to ease me back into the writing game, here’s a small(-ish) but useful spot of Java.

It’s always nice to borrow and steal concepts and ideas from other languages. Scala’s Option is one idea I really like, so I wrote an implementation in Java. It wraps an object which may or may not be null, and provides some methods for working with it in a more kinda-sorta functional way. For example, the isDefined method adds an object-oriented way of checking if a value is null. It is then used in other places, such as the getOrElse method, which basically says “give me what you’re wrapping, or a fallback if it’s null”.

public T getOrElse(T fallback)
{
    return isDefined() ? get() : fallback;
}
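
For context, here’s a minimal sketch of what the rest of the wrapper might look like – the option factory method and isDefined are the pieces used in this post; treat it as illustrative rather than the definitive implementation:

public class Option<T>
{
    private final T value;

    private Option(T value)
    {
        this.value = value;
    }

    // wrap a possibly-null value
    public static <T> Option<T> option(T value)
    {
        return new Option<T>(value);
    }

    // an object-oriented way of asking whether a value is actually present
    public boolean isDefined()
    {
        return value != null;
    }

    public T get()
    {
        return value;
    }

    public T getOrElse(T fallback)
    {
        return isDefined() ? get() : fallback;
    }

    // ...plus getOrThrow, which is introduced further down
}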

In practice, this would replace traditional Java, such as

public void foo()
{
    String s = dao.getValue();
    if (s == null)
    {
        s = "bar";
    }
    System.out.println(s);
}

with the more concise and OO

public void foo()
{
    Option<String> s = dao.getValue();
    System.out.println(s.getOrElse("bar"));
}

However, what if I want to do something other than get a fallback value – say, throw an exception? More to the point, what if I want to throw a specific type of exception – that is, both specific in use and not hard-coded into Option? This requires a spot of cunning, and a splash of type inference.

Because this is Java, we can start with a new factory – ExceptionFactory. This is a basic implementation that only creates exceptions constructed with a message, but you can of course expand the code as required.

public interface ExceptionFactory<E extends Exception>
{
    E create(String message);
}

Notice the <E extends Exception> – this is the key to how this works. Using the factory, we can now add a new method to Option:

public <E extends Exception> T getOrThrow(ExceptionFactory<E> exceptionFactory,
                                          String message) throws E
{
    if (isDefined())
    {
        return get();
    }
    else
    {
        throw exceptionFactory.create(message);
    }
}

Again, notice the throws E – this is inferred from the exception factory.

And that, believe it or not, is 90% of what it takes. The one irritation is the need to have exception factories. If you can stomach this, you’re all set. Let’s define a couple of custom exceptions to see this in action.

public class ExceptionA extends Exception
{
    public ExceptionA(String message)
    {
        super(message);
    }

    public static ExceptionFactory<ExceptionA> factory()
    {
        return new ExceptionFactory<ExceptionA>()
        {
            @Override
            public ExceptionA create(String message)
            {
                return new ExceptionA(message);
            }
        };
    }
}

And the suspiciously similar ExceptionB

public class ExceptionB extends Exception
{
    public ExceptionB(String message)
    {
        super(message);
    }

    public static ExceptionFactory<ExceptionB> factory()
    {
        return new ExceptionFactory<ExceptionB>()
        {
            @Override
            public ExceptionB create(String message)
            {
                return new ExceptionB(message);
            }
        };
    }
}

And finally, throw it all together:

public class GenericExceptionTest
{
    @Test(expected = ExceptionA.class)
    public void exceptionA_throw() throws ExceptionA
    {
        Option.option(null).getOrThrow(ExceptionA.factory(),
                                       "Some message pertinent to the situation");
    }

    @Test
    public void exceptionA_noThrow() throws ExceptionA
    {
        String s = Option.option("foo").getOrThrow(ExceptionA.factory(),
                                                   "Some message pertinent to the situation");
        Assert.assertEquals("foo",
                            s);
    }

    @Test(expected = ExceptionB.class)
    public void exceptionB_throw() throws ExceptionB
    {
        Option.option(null).getOrThrow(ExceptionB.factory(),
                                       "Some message pertinent to the situation");
    }

    @Test
    public void exceptionB_noThrow() throws ExceptionB
    {
        String s = Option.option("foo").getOrThrow(ExceptionB.factory(),
                                                   "Some message pertinent to the situation");
        Assert.assertEquals("foo",
                            s);
    }
}

The important thing to notice is that the exception declared in each test method’s signature is specific – it’s not a common ancestor such as Exception or Throwable. This means you can now use Options in your DAO layer, your service layer, wherever, and throw specific exceptions where and how you need.
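
As a rough illustration of what that can look like in a service layer – CustomerDao, Customer and CustomerNotFoundException are made-up names here, and the exception is assumed to expose a factory() in the same style as ExceptionA above:

public class CustomerService
{
    private final CustomerDao dao;

    public CustomerService(CustomerDao dao)
    {
        this.dao = dao;
    }

    public Customer byUserName(String userName) throws CustomerNotFoundException
    {
        // dao.byUserName returns Option<Customer>; the throws clause stays specific
        // because the exception type is inferred from the factory
        return dao.byUserName(userName)
                  .getOrThrow(CustomerNotFoundException.factory(),
                              "No customer found for user name " + userName);
    }
}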

Download source: You can get the source code and tests from here – genex

Sidenote
One other interesting thing that came out of writing this was the observation that it’s possible to do this:

public void foo()
{
    throw null;
}

public void bar()
{
    try
    {
        foo();
    }
    catch (NullPointerException e)
    {
        ...
    }
}

It goes without saying that this is not a good idea :)

Devoxx 2012 Swag

Devoxx 2012 has been and gone, and it was a fantastic few days in Antwerpen. I caught up with a lot of great people, made the acquaintance of even more and saw some superb presentations (plus a couple of duds, but there’s an NFC voting system to express this).

I’ve got a lot of material that I need to write up – so much so that I’m getting carpal tunnel syndrome just thinking about it – but in the meantime, here’s a rundown of the things that really count.

This year at Devoxx, some true geeks gave to me

  • 14 assorted t-shirts
  • 12 pens
  • 2 notepads
  • 3 books – Scala in Depth, MongoDB: The Definitive Guide and Infinispan Data Grid Platform
  • 24 badges
  • 2 MongoDB monkeys (or possibly apes)
  • 1 card reader
  • One extremely orange headband
  • 1 frisbee
  • 3 lanyards
  • Numerous chocolate bars and assorted confectionery
  • 1 iPhone cover
  • 3 tins of mints (hopefully not because of my breath)
  • An already distributed and therefore uncountably large stack of stickers
  • Something that may or may not be a gonk
  • 2 yo-yos
  • A block of post-it notes
  • 1 bag
  • 1 hoodie (which I unfortunately lost)
  • 1 pack of agile poker planning cards
  • 1 lens cleaner in a pouch
  • 1 bicycle seat cover
  • 2 three-dimensional puzzles
  • Various refrigerator magnets

Thanks to Ben Verbeken for giving me a lift home after it became clear I couldn’t carry it all home using the train!

Back to Basics – good comments are targeted comments

I can’t think of a single person who enjoys writing comments in code. I don’t, my friends and colleagues don’t, and I’m pretty sure there isn’t a meetup group for fans of it. Outside of code that I write for blog posts, I can pretty much guarantee that the only place I write comments is in interfaces.

The simple reason for this is that a) interfaces should be explicit, and b) implementations should be self-documenting.

The reason for this blog post is that I’m seeing a lot of code at the moment that falls into one of two traps – either everything is documented, or nothing is documented.

Everything is documented

Spot what’s wrong with this code:

/**
 * The foo class represents blah blah blah...and so on, describing the 
 * class in such detail it's a pity the code couldn't be generated from it
 */
public class Foo {

    /** The name */
    private String name;

   /**
    * Do something with this instance.
    */
   public void doSomething() {
      // Get the name
      String localName = this.getName();

      // munge the local name
      localName.munge();
   }

   /**
    * Get the name.
    * @return the name
    */
   public String getName() {
      // return the name 
      return this.name;
   } 

   /**
    * Set the name.
    * @param name the name
    */
   public void setName(String name) {
      // set the name 
      this.name = name;
   } 
}

Or, to put it another way, spot what’s right with it. That’s a much shorter answer. The code is full of unnecessary comments – e.g. getName() gets the name – and code that seems to have been written just so it could be commented – e.g. String localName = this.getName(). The names have been changed to protect the guilty, but this is real code I’ve seen in a live codebase.

As far as I’m concerned, implementations don’t need code-level comments because that’s what the code does anyway.
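
For contrast, here’s a comment-free sketch of more or less the same class – I’ve also dropped the artificial local variable and swapped the illustrative munge() call for a println so the sketch actually compiles. Nothing a reader needs has been lost:

public class Foo {

    private String name;

    public void doSomething() {
        System.out.println("Doing something with " + name);
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}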

Nothing is documented

At the other end of the scale is this little gem:

public interface Parser {
    void parse(InputStream is) throws IOException, SQLFeatureNotSupportedException;
}

Interfaces, on the other hand, should have clear documentation that defines what goes in, a generic description of what should happen, and a clear description of what comes out and what exceptions can be thrown. Information at this level should state if, for example, null arguments are allowed, if a null value can be returned, the circumstances in which certain exceptions can be thrown, and so on.
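
Applied to the Parser example above, the contract might read something like this – the exact wording is mine and purely illustrative:

public interface Parser {
    /**
     * Parse the entire contents of the given stream.  The stream is not closed by this method.
     *
     * @param is the stream to parse.  Must not be null
     * @throws IOException if the stream cannot be read
     * @throws SQLFeatureNotSupportedException if the parsed content requires a database feature
     *         that is not available
     */
    void parse(InputStream is) throws IOException, SQLFeatureNotSupportedException;
}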

Interfaces should be explicit

Interfaces, to my way of thinking, are contracts, and contracts – as any blood-sucking lawyer can tell you – exist to be honoured. They can’t be honoured if the terms are not explicitly set out.

There are no Burger Kings in Belgium, so if I’m in the UK or the Netherlands I am generally tempted to have one. On my most recent visit, I noticed this at the bottom of the receipt:

“Free drink with any adult burger with this receipt Excluding Hamburger, Cheeseburger or King deal or any promotional offers”

Or, to put it another way…

/**
 * Get the free drink promised by the receipt.  This is valid for any burger.
 * @param burger the burger
 * @param receipt the receipt
 * @return the free drink
 * @throws InvalidBurgerException if the burger doesn't qualify for a free drink
 */
public Drink getFreeDrink(Burger burger,
                          Receipt receipt) throws InvalidBurgerException {
    if (MealType.HAMBURGER == burger.type()
        || MealType.CHEESEBURGER == burger.type()
        || MealType.KING_DEAL == burger.type()
        || burger.isPromo()) {
        throw new InvalidBurgerException();
    }
    return new Drink();
}

To my simple brain, this is confusing and contradictory as hell. Your API should be clear and (as my university teachers beat into me) unambiguous – for example, the words “any” and “except” should not appear in the same sentence. Strive for clarity – if you find your API is too hard to clearly document, there’s a good chance it will be annoying to use. In the case above, an improvement would be something along the lines of

/**
 * Get the free drink promised by the receipt.  Not every burger qualifies for a free drink.
 * @param burger the requested burger.  This may or may not qualify for a free drink
 * @param receipt the receipt containing the offer
 * @return the free drink. May be null, depending on the burger
 */
public Drink getFreeDrink(Burger burger,
                          Receipt receipt) {
    // implementation
}

Note that I’ve also got rid of the exception as a result of being more explicit.
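
At the call site, the difference is a null check instead of a try/catch. A quick sketch – serveCustomer and enjoy are hypothetical names standing in for whatever your real code does:

public void serveCustomer(Burger burger, Receipt receipt) {
    Drink drink = getFreeDrink(burger, receipt);
    if (drink != null) {
        enjoy(drink);
    }
    // no exception handling needed when the burger doesn't qualify
}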

Conclusion

Annoying as it is, documentation is extremely important in the correct place and completely useless everywhere else. Done correctly, it will make your API easier to use and easier to maintain, which is generally a good thing.

Here endeth the lesson.