My adventures in front-end-land

Two Backbone apps later… lessons learned.

I have to admit, I’ve been a secret admirer of great front-end developers. I thought my brain wasn’t wired for HTML/JS/CSS.
My wife Marianne and I started a SaaS about a year ago, aimed at off-loading teachers who teach math to kids aged 6-12. You’ll find the service at http://nomp.se/
The service makes it a breeze to hand out personalized assignments and to follow them up. All content is automatically generated, so the teacher doesn’t have to produce the assignments either. The service is free for all kids to use. If you have a subscription, you can do the teacher/parent thing.

When you try to run a bootstrapped (self-funded) business, you have to roll up your sleeves and just get things done, and often that means things you haven’t done before. Marianne and I are Java developers by trade. I have a background in Unix sysadmin but haven’t done that professionally in 12 years. Instead I’ve been coding Java, with a break between 2007 and 2011 when I was Chief Architect at Unibet. That was very challenging and a great learning experience technically, but there is no time for coding when you’re constantly running or governing key transformation projects while trying to be a line manager at the same time.

Some of the jobs that need to be performed in our startup include:

  • Business development / requirements management
  • Back-end developer (Java/Spring)
  • Front-end developer (HTML/CSS/LESS, jQuery/Underscore/Backbone)
  • Quality assurance (testing in all the different browsers, etc.)
  • Systems administrator (Amazon Web Services: EC2/RDS/R53/SES/SQS/Cloudfront, Apache/Tomcat perf tuning, Security patching)
  • Database administrator (Tuning, backup strategy)
  • Configuration management (Maven/Jenkins/RPM-packaging and deploy automation)
  • Sales and marketing (Social media campaigns and online ads)
  • Finance (bookkeeping, invoicing)
  • Customer service (email support)

As both Marianne and I still have our daytime jobs to manage, there are a lot of late nights… but also a lot of fun!

The first version of our math SaaS was a “traditional” server-side rendered application using Spring MVC and Freemarker for templating.
After the initial launch, we spent some time looking at rewriting the most heavily used part – the student quiz – in JavaScript in order to improve the user experience. We didn’t know much about jQuery, JavaScript or Ajax at this point.

The first rewrite did the job. It off-loaded the server and improved the user experience in terms of response times: eleven server round trips were reduced to two (start the quiz and submit the result). However, it was unmaintainable for a number of reasons:

  1. No namespacing or modules (all functions in the global scope)
  2. Adding a new quiz type involved changing the core quiz logic which meant that stuff broke
  3. The DOM manipulation was spread out all over the JavaScript code
  4. Client side templating of the quizzes was done with custom code
  5. We had two implementations, one for the desktop and another for mobile (jQuery mobile)
  6. The markup was a mess, without good structure and class names
  7. HTML5 canvas code was too low level to be productive

So after spending some additional months doing other stuff, we increasingly felt that the solution had gotten out of hand.
We then spent some time researching the front-end open-source space and found some interesting contributions:

Backbone.js

Backbone is a tiny MVP (model-view-presenter) framework for developing client-side applications in JavaScript.
For better and for worse, it’s a tiny framework: only about 7000 bytes to transfer, minified and gzipped.
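The core idea is small: models hold state and emit events, and views listen to those events and re-render. Here is a minimal illustration of the shape (not Nomp code, just a sketch):

// A model holding a counter, and a view that re-renders whenever it changes.
var Counter = Backbone.Model.extend({
  defaults: { clicks: 0 }
});

var CounterView = Backbone.View.extend({
  events: { "click": "increment" }, // DOM events are delegated declaratively

  initialize: function() {
    // re-render whenever the model changes
    this.model.on("change", this.render, this);
  },

  increment: function() {
    this.model.set("clicks", this.model.get("clicks") + 1);
  },

  render: function() {
    this.$el.text(this.model.get("clicks") + " clicks");
    return this;
  }
});

new CounterView({ model: new Counter(), el: "#counter" }).render();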

Twitter Bootstrap

Bootstrap is an HTML/CSS framework with standardized markup and ready-made components that can be customized. It supports responsive design, so the same site can be used on mobile, tablet and desktop.

KineticJS

KineticJS is an HTML5 Canvas JavaScript framework that enables high performance animations, transitions, node nesting, layering, filtering, caching, event handling for desktop and mobile applications, and much more.

We then started to rewrite the quiz engine once again using Backbone and Bootstrap, and it was quite a struggle for JavaScript rookies like us.
Backbone is elegant, but there is a lack of good tutorials beyond the trivial hello-world examples. I learned a lot from Christophe Coenraets’ blog post http://coenraets.org/blog/2012/05/single-page-crud-application-with-backbone-js-and-twitter-bootstrap/ even though it’s very simple (in retrospect).

We also struggled with things I take for granted in other frameworks, such as a best-practice project structure and naming conventions.

Nevertheless, we shipped a new quiz engine in Backbone/Bootstrap for Nomp.se in less than a month, which was pretty OK considering our skill level at the time. Was it perfect? No, but it was a huge improvement. Now we can extend the quiz engine with new quizzes quickly and in a modular way.

The next evolution was a complete rewrite of the teacher/parent back office, which was pretty limited from a functionality point of view and, to be honest, a horrible user experience.

This was a considerably bigger effort, with twenty-odd views and perhaps twice as many use cases.
In this case we felt it was necessary to use a module system (and a module loader) in order to track dependencies between components.

Require.js does a pretty good job at this, but we found the documentation hard to follow. It comes with an optimizer to minify and combine JavaScript. Not too many people seem to be integrating JavaScript into their Maven build either, so it took a while to find a good Maven plugin that lets you run JavaScript from Maven.
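For the curious, a minimal AMD module in the style we ended up with might look like this. Module names and paths here are illustrative, and older Backbone/Underscore versions need a shim config since they aren’t AMD modules out of the box:

// profileView.js - a Backbone view defined as an AMD module. Require.js
// resolves and loads the dependencies before the factory function runs;
// the text! plugin loads the template as a plain string.
define(["jquery", "underscore", "backbone", "text!templates/profile.html"],
  function($, _, Backbone, tpl) {

    var ProfileView = Backbone.View.extend({
      template: _.template(tpl),
      render: function() {
        this.$el.html(this.template(this.model.toJSON()));
        return this;
      }
    });

    return ProfileView;
  });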

When it comes to data binding, we started out with Backbone.Forms, but we quickly felt it was overly complicated and not always designed to be extended. I18n wasn’t supported in a good way either.

To minimize duplication, we ended up rolling our own minimal data binding solution that consists of a Backbone model mixin, a view mixin and a few JS helpers.

Here’s how the resulting code looks when using the mixins. The code implements a form with client-side and server-side validation, and both types of errors are displayed in the form:

Backbone Model and View

// (the ProfileModel/ProfileView variable names are added here for readability)
var ProfileModel = Backbone.Model.extend({
  urlRoot: "/api/profile",
  initialize: function() {
    var mixin = new Util.ErrorHandlingMixin(); // <--- the pixie dust
    _.extend(this, mixin);
  },
  validate: function(attributes, options) {
    var errors = [];
    if (this.get('firstName') == null || this.get('firstName').length < 2) {
      errors.push({ attr: 'firstName', error: 'You need to provide at least two characters' });
    }
    if (this.get('lastName') == null || this.get('lastName').length < 2) {
      errors.push({ attr: 'lastName', error: 'You need to provide at least two characters' });
    }
    if (this.get('rewardIcon') == null) {
      errors.push({ attr: 'rewardIcon', error: 'Mandatory field' });
    }
    return errors.length > 0 ? errors : null;
  },
  getAllRewardIcons: function() {
    return ["BLUE_STAR", "YELLOW_STAR", "GREEN_STAR", "ORANGE_STAR",
            "BLUE_HEART", "YELLOW_HEART", "GREEN_HEART", "ORANGE_HEART",
            "BLUE_SMILEY", "YELLOW_SMILEY", "GREEN_SMILEY", "ORANGE_SMILEY",
            "BLUE_CANDY", "YELLOW_CANDY", "GREEN_CANDY", "ORANGE_CANDY"];
  },
  getAllLocales: function() {
    return ["en_GB", "sv_SE"];
  }
});

var ProfileView = BaseView.extend({
  initialize: function() {
    var mixin = new Util.ErrorHandlingViewMixin(); // <--- the pixie dust
    _.extend(this, mixin);
    this.model.on("invalid", this._showErrors, this);
  },
  events: {
    "click .save-form": "saveForm",
    "focus input": "validateForm"
  },
  template: _.template(tpl),
  render: function() {
    $(this.el).html(this.template(_.defaults(this.model.toJSON(),
      { rewardIcons: this.model.getAllRewardIcons(), locales: this.model.getAllLocales() })));
    return this;
  }
});
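The mixins themselves aren’t listed in this post. To give an idea of how little is needed, here is a rough sketch of what they could look like. This is my reconstruction for illustration, not the actual Nomp code; in particular, the server error response format ({ errors: [{ attr: ..., error: ... }] }) is an assumption.

// Sketch of the model mixin (hypothetical reconstruction): turn a failed
// save into the same "invalid" event that a failing client-side validate()
// triggers, so the view has a single error-rendering path.
Util.ErrorHandlingMixin = function() {
  this.save = function(attributes, options) {
    var model = this;
    options = _.extend({}, options, {
      error: function(m, xhr) {
        // assumed response body: { errors: [{ attr: ..., error: ... }] }
        var response = JSON.parse(xhr.responseText);
        model.trigger("invalid", model, response.errors);
      }
    });
    return Backbone.Model.prototype.save.call(this, attributes, options);
  };
};

// Sketch of the view mixin (hypothetical reconstruction): paint each
// { attr, error } pair into the Bootstrap control group that contains
// the matching input.
Util.ErrorHandlingViewMixin = function() {
  this._showErrors = function(model, errors) {
    this.$(".control-group").removeClass("error").find(".help-inline").empty();
    _.each(errors, function(item) {
      var input = this.$("[name='" + item.attr + "']");
      input.closest(".control-group").addClass("error");
      input.closest(".control-group").find(".help-inline").text(item.error);
    }, this);
  };
};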

Markup with helper functions

<div>
  <form class="form-horizontal">
    <fieldset>
      <legend><%= i18n.t('Personal information') %></legend>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('First name') %></label>
        <div class="controls">
          <input name="firstName" type="text" value="<%- firstName %>">
          <div class="help-inline"></div>
        </div>
      </div>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('Last name') %></label>
        <div class="controls">
          <input name="lastName" type="text" value="<%- lastName %>">
          <div class="help-inline"></div>
        </div>
      </div>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('Description') %></label>
        <div class="controls">
          <input name="description" type="text" value="<%- description %>">
          <div class="help-inline"></div>
          <div class="help-block"><%= i18n.t('The description is shown to students that would like to become your mentee') %></div>
        </div>
      </div>
    </fieldset>
    <fieldset>
      <legend><%= i18n.t('Settings') %></legend>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('Language') %></label>
        <div class="controls">
          <%= forms.select("locale", locale, "Locale", locales) %>
        </div>
        <div class="help-inline"></div>
      </div>
      <div class="control-group">
        <div class="controls">
          <%= forms.checkbox("subscribedToNewsletters", subscribedToNewsletters, i18n.t('Send me news about Nomp (about two times per month)')) %>
        </div>
      </div>
      <div class="control-group">
        <div class="controls">
          <%= forms.checkbox("subscribedToNotifications", subscribedToNotifications, i18n.t('Send me quest notification emails')) %>
        </div>
      </div>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('Reward symbol') %></label>
        <div class="controls">
          <%= forms.select("rewardIcon", rewardIcon, "RewardIcon", rewardIcons) %>
          <span><img class="reward-icon" src="/static/img/rewardicon/<%= rewardIcon %>.png"></span>
          <div class="help-block"><%= i18n.t('The reward symbol is used in quests that you give out.') %></div>
        </div>
      </div>
    </fieldset>
    <div class="form-actions">
      <button class="btn btn-primary save-form"><%= i18n.t('Save') %></button> <span class="server-error error"></span>
    </div>
  </form>
</div>
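The forms.select and forms.checkbox calls above are our own small template helpers; they just expand to ordinary form controls. They aren’t listed in this post, but a sketch of what they could look like follows (reconstructed from the template above, not the actual Nomp code; the i18n key scheme for option labels is a guess):

// Hypothetical reconstruction of the template helpers used above.
// The third argument to select() is assumed to be an i18n key prefix
// used to translate the option labels.
var forms = {
  select: function(name, selected, keyPrefix, values) {
    var html = '<select name="' + name + '">';
    _.each(values, function(value) {
      html += '<option value="' + value + '"' +
              (value === selected ? ' selected="selected"' : '') + '>' +
              i18n.t(keyPrefix + '.' + value) + '</option>';
    });
    return html + '</select>';
  },

  checkbox: function(name, checked, label) {
    // the label is already translated at the call site
    return '<label class="checkbox">' +
           '<input type="checkbox" name="' + name + '"' +
           (checked ? ' checked="checked"' : '') + '> ' + label + '</label>';
  }
};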

We’d love your feedback on the data-binding approach. If people find it useful, we’ll probably open source the small data-binding framework once we’ve used it a bit more. We think it minimizes boilerplate code and avoids duplication without violating Backbone principles such as using Model.validate() for validation.

Let us know what you think!


An attempt at a developer-friendly build pipeline

Background

I’ve spent some evenings/nights over the Christmas holiday improving the deployment of Nomp.se, a site where kids can practice math for free, which we run on EC2.

The situation we had was that we deployed to the EC2 server using a locally installed Jenkins CI server, which built the artifact (a WAR) and used the Maven Tomcat plugin to deploy to the local Tomcat server, which was an RPM package provided by Amazon (yum install tomcat6). The setup worked pretty OK, but it was a hack. Database changes were applied and tested manually – we had a folder “sql” that contained numbered SQL files that had to be applied in order.

Clearly a lot of room for improvement in this area!

Goals with the new build pipeline

I wanted to reach the following goals with the new build pipeline:

  • One build from the build server all the way from my local Jenkins through test environments and into production.
  • 100% control over configuration changes of all components (Apache httpd, Apache Tomcat, MySql database), so that changes can be tested in the normal pipeline without relying on manual hacks.
  • It should be developer friendly. A developer with a basic understanding of Linux, Maven and Tomcat should be able to make changes to and work with the build pipeline.
  • Hence, it should only rely on basic tooling (Ant, Maven, RPM packages) for the heavy lifting, and use the capabilities of other tools (e.g. Jenkins, Puppet, Capistrano) as (non-critical) value-add.

After a few iterations, this is all it takes to deploy any configuration change onto a production server.

on the build server:
 $ mvn deploy

on the target server:
 # yum -y update nomp-web nomp-tomcat nomp-dbdeploy
 # cd /opt/nomp-dbdeploy; ant
 # /etc/init.d/nomp restart

That’s it. Four steps. There are no shell scripts involved. There is no rsync, there is no scp-ing of files. How did I do it? Hold on, I’ll come to that in a minute or two 🙂

System configuration and prerequisites

In order to make sure the server contains the prerequisite packages and configuration I used Puppet.

“Puppet is a declarative language for expressing system configuration, a client and server for distributing it, and a library for realizing the configuration.

Rather than approaching server management by automating current techniques, Puppet reframes the problem by providing a language to express the relationships between servers, the services they provide, and the primitive objects that compose those services. Rather than handling the detail of how to achieve a certain configuration or provide a given service, Puppet users can simply express their desired configuration using the abstractions they’re used to handling, like service and node, and Puppet is responsible for either achieving the configuration or providing the user enough information to fix any encountered problems.”

from http://projects.puppetlabs.com/projects/puppet/wiki/Big_Picture

I’m not going to go into detail on how to set up Puppet in this text, but here’s what I do with Puppet in order to support the build pipeline:

  • Ensure that the service accounts and groups exist on the target system
  • Ensure that software I rely on is installed (ant, apache httpd, mysqld)
  • Configuration management of a few configuration files such as httpd.conf, my.cnf etc.

Puppet config file example:

user { 'nomp':
  ensure     => present,
  uid        => 300,
  gid        => 300,
  shell      => '/bin/bash',
  home       => '/opt/nomp',
  managehome => true,
}

group { 'nomp':
  ensure => present,
  gid    => 300,
}

package { 'ant':
  ensure => installed,
}

The above configuration means that Puppet will ensure that the user nomp and the group nomp exist on the system and that the ant package is installed.
I will do a whole lot more configuration management and provisioning with Puppet going forward, but the above is what was needed to meet my project goals.

Getting started

I started by trying to package my existing WAR project as an RPM (or .deb). After Googling around for a while I found the RPM Maven Plugin (http://mojo.codehaus.org/rpm-maven-plugin/). It basically lets you build RPMs using Maven. The downside is that it relies on the “rpm” command being installed in order to produce the final RPM from the spec file. In order to get a working Maven environment on all platforms, I wrapped the rpm plugin in a Maven build profile.

(Later I also found a pure-Java RPM tool, redline-rpm, but I haven’t looked into it yet.)

The trickiest part was to get a good setup for artifact versions and RPM release versions, so that the Maven release plugin could still be used without any manual changes.
The rpm plugin has some funky defaults (http://mojo.codehaus.org/rpm-maven-plugin/ident-params.html#release) that weren’t going to work with “yum update”.
It took a lot of experimentation, but in the end I settled for the Build Number Maven Plugin (http://mojo.codehaus.org/buildnumber-maven-plugin/).
It’s a pretty simple plugin that checks the SCM for the revision number and exposes it as a Maven variable.

Here’s the RPM-part of my WAR POM:

<profiles>
  <profile>
    <id>rpm</id>
    <activation>
      <os>
        <name>linux</name>
      </os>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>rpm-maven-plugin</artifactId>
          <version>2.1-alpha-1</version>
          <extensions>true</extensions>
          <executions>
            <execution>
              <goals>
                <goal>attached-rpm</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <copyright>Copyright 2011 Selessia AB</copyright>
            <distribution>Nomp</distribution>
            <group>${project.groupId}</group>
            <packager>${user.name}</packager>
            <!-- need to use the build number plugin here in order for yum upgrade to work in snapshots -->
            <release>${buildNumber}</release>
            <defaultDirmode>555</defaultDirmode>
            <defaultFilemode>444</defaultFilemode>
            <defaultUsername>nomp</defaultUsername>
            <defaultGroupname>nomp</defaultGroupname>
            <requires>
              <require>nomp-tomcat</require>
            </requires>
            <mappings>
              <!-- webapps deployment -->
              <mapping>
                <directory>${rpm.install.webapps}/${project.artifactId}</directory>
                <sources>
                  <source>
                    <location>target/${project.artifactId}-${project.version}</location>
                  </source>
                </sources>
              </mapping>
            </mappings>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Here’s the build number plugin configuration:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>buildnumber-maven-plugin</artifactId>
    <version>1.0</version>
    <executions>
      <execution>
        <phase>validate</phase>
        <goals>
          <goal>create</goal>
        </goals>
      </execution>
    </executions>
    <configuration>
      <doCheck>true</doCheck>
      <doUpdate>true</doUpdate>
    </configuration>
  </plugin>

All the configuration above adds a secondary artifact (the RPM), which gets uploaded to the Nexus Maven repository on “mvn deploy”.

I don’t really need the WAR file anymore, as I package the webapp exploded inside the RPM. I might change the primary artifact type from WAR to RPM in the future, but I haven’t looked into that yet.

Packaging the app server as an RPM

The next thing I did was to package the app server as an RPM as well. I feel it’s more developer friendly to build a Tomcat RPM using Maven too, rather than just grabbing some arbitrary RPM and using Puppet to fix the configuration. We also get full control over where it is installed and where the logs are.

One thing I really wanted to avoid was having to check the Tomcat distribution tarball into Subversion. I hate blobs in SVN, so I was pleasantly surprised to learn that Nexus handles any type of file. I simply uploaded the latest Tomcat distribution tar (apache-tomcat-7.0.23.tar.gz) into my Nexus 3rd-party repository.

I created a sibling project “tomcat” with a pom that looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <!-- avoid rpm here as classifier will differ and Nexus search will fail -->
  <packaging>pom</packaging>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <artifactId>nomp-parent</artifactId>
    <groupId>se.nomp</groupId>
    <version>2.1.0-SNAPSHOT</version>
  </parent>
  <artifactId>nomp-tomcat</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>Nomp Tomcat Server</name>
  <description>Tomcat server for Nomp</description>
  <properties>
    <tomcat.version>7.0.23</tomcat.version>
    <tomcat.build.dir>${project.build.directory}/tomcat/apache-tomcat-${tomcat.version}</tomcat.build.dir>
    <rpm.install.basedir>/opt/nomp</rpm.install.basedir>
    <rpm.install.logdir>/var/log/nomp</rpm.install.logdir>
  </properties>
  <profiles>
    <!-- Only run the RPM packaging on Linux, as we need the rpm binary to build RPMs using the rpm plugin -->
    <profile>
      <id>rpm</id>
      <activation>
        <os>
          <name>linux</name>
        </os>
      </activation>
      <build>
        <plugins>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>rpm-maven-plugin</artifactId>
            <version>2.1-alpha-1</version>
            <extensions>true</extensions>
            <executions>
              <execution>
                <goals>
                  <goal>attached-rpm</goal>
                </goals>
              </execution>
            </executions>
            <configuration>
              <copyright>Copyright 2011 Selessia AB</copyright>
              <distribution>Nomp</distribution>
              <group>${project.groupId}</group>
              <packager>${user.name}</packager>
              <!-- need to use the build number plugin here in order for yum upgrade to work in snapshots -->
              <release>${buildNumber}</release>
              <defaultDirmode>755</defaultDirmode>
              <defaultFilemode>444</defaultFilemode>
              <defaultUsername>root</defaultUsername>
              <defaultGroupname>root</defaultGroupname>
              <mappings>
                <mapping>
                  <directory>${rpm.install.basedir}/logs</directory>
                  <sources>
                    <softlinkSource>
                      <location>${rpm.install.logdir}</location>
                    </softlinkSource>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${rpm.install.logdir}</directory>
                  <username>nomp</username>
                  <groupname>nomp</groupname>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/bin</directory>
                  <filemode>555</filemode>
                  <sources>
                    <source>
                      <location>${tomcat.build.dir}/bin</location>
                    </source>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/conf</directory>
                  <sources>
                    <source>
                      <location>${tomcat.build.dir}/conf</location>
                    </source>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/lib</directory>
                  <sources>
                    <source>
                      <location>${tomcat.build.dir}/lib</location>
                    </source>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/work</directory>
                  <username>nomp</username>
                  <groupname>nomp</groupname>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/temp</directory>
                  <username>nomp</username>
                  <groupname>nomp</groupname>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/conf/Catalina</directory>
                  <username>nomp</username>
                  <groupname>nomp</groupname>
                </mapping>
                <mapping>
                  <directory>/etc/init.d</directory>
                  <directoryIncluded>false</directoryIncluded>
                  <filemode>555</filemode>
                  <sources>
                    <source>
                      <location>src/main/etc/init.d</location>
                    </source>
                  </sources>
                </mapping>
              </mappings>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>

  <build>
    <resources>
      <resource>
         <!-- overlay the contents of the resources src dir on top of the unpacked tomcat -->
         <directory>src/main/resources</directory>
         <filtering>false</filtering>
       </resource>
     </resources>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>buildnumber-maven-plugin</artifactId>
        <version>1.0</version>
        <executions>
          <execution>
            <phase>validate</phase>
            <goals>
              <goal>create</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <doCheck>true</doCheck>
          <doUpdate>true</doUpdate>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-clean-plugin</artifactId>
        <version>2.4.1</version>
        <executions>
          <execution>
            <id>auto-clean</id>
            <phase>initialize</phase>
            <goals>
              <goal>clean</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-resources-plugin</artifactId>
        <version>2.5</version>
        <executions>
          <execution>
            <id>resources</id>
            <!-- need to specify, as this is not default for pom packaging -->
            <phase>process-resources</phase>
            <goals>
              <goal>resources</goal>
            </goals>
            <configuration>
              <encoding>UTF-8</encoding>
              <outputDirectory>${tomcat.build.dir}</outputDirectory>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <version>2.4</version>
        <executions>
          <execution>
            <id>unpack-tomcat</id>
            <phase>generate-resources</phase>
            <goals>
              <!-- unpack the tomcat dependency that's been downloaded from your local 3rd party repo -->
              <goal>unpack-dependencies</goal>
            </goals>
            <configuration>
              <outputDirectory>${project.build.directory}/tomcat</outputDirectory>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <!-- the tomcat distro that's been uploaded to the local third party maven repo -->
    <dependency>
       <groupId>org.apache.tomcat</groupId>
       <artifactId>apache-tomcat</artifactId>
       <version>${tomcat.version}</version>
       <type>tar.gz</type>
     </dependency>
   </dependencies>
</project>

Note that the Tomcat artifact is just a normal maven dependency. I used the maven-dependency-plugin to automatically unpack the archive.
I then overlay the configuration files I want to change with the well known maven-resources-plugin.

Okay. Now I was pretty happy. I was building two good RPMs with proper version and release numbers that were deployed to my Nexus on “mvn deploy”.

Distributing the packages

The next step was then to export these files into a yum repository. Or so I thought…
I was pleasantly surprised, or more like super-excited, when I realized that some awesome folks had made a plugin for Nexus (nexus-yum-plugin, http://code.google.com/p/nexus-yum-plugin/) that exposes a Nexus Maven repo as a yum repo!

If you have yum installed, just add a repository configuration to your target server (I use Puppet to automate this).

Here’s how it looks:

root@manny:/etc/yum/repos.d# cat nexus-snapshot.repo
 [nexus-snapshots]
 name=Nomp Nexus - Snapshots
 baseurl=http://manny:8082/nexus/content/repositories/snapshots/
 enabled=1
 gpgcheck=0

You need to add one config for your snapshot repo and another for your release repo.
Test your setup with “yum list” (you need to redeploy at least one RPM artifact in each repo in order for the yum-plugin to create the RPM-repo).

root@manny:/etc/yum/repos.d# yum list
Installed Packages
 nomp-dbdeploy.noarch 0.0.2-1788 @maven-snapshots
 nomp-tomcat.noarch 0.0.1-1788 @maven-snapshots
 nomp-web.noarch 2.1.0-1788 @maven-snapshots
Available Packages
 nomp-dbdeploy.noarch 0.0.2-1793 maven-snapshots
 nomp-tomcat.noarch 0.0.1-1793 maven-snapshots
 nomp-web.noarch 2.1.0-1793 maven-snapshots

In order to transfer the RPM packages and install the software, you just type:

# yum -y install nomp-web

or if already installed:

# yum -y update nomp-web nomp-tomcat

Pretty sweet! It’s so easy for anyone to find out what is installed/deployed on a server when you use RPM packages!

The database is code too

In order to ensure that database scripts are tested throughout the deploy pipeline, we also need to treat our database scripts as code that is run in each environment.
I like to use dbdeploy (http://code.google.com/p/dbdeploy/) for database patch script packaging. Dbdeploy is a simple database change management tool that applies SQL files in a specified order. It can be run from the command line or from Ant. It has a Maven plugin as well, but I don’t want to use that, as I don’t want Maven installed on the production servers.

I ended up making a separate RPM with the SQL change scripts for the application, and packaged the required jar dependencies inside the RPM. The entry point is an Ant build.xml script for Nomp.

The build.xml I use for the dbdeploy package looks like this:

<project name="MyProject" default="dbdeploy" basedir=".">
    <description>dbdeploy script for nomp</description>
    <record name="dbdeploy.log" loglevel="verbose" action="start" />
    <path id="dbdeploy.classpath" >
        <fileset dir="lib">
            <include name="*.jar" />
        </fileset>
    </path>

    <taskdef name="dbdeploy" classname="com.dbdeploy.AntTarget" classpathref="dbdeploy.classpath" />

    <target name="dbdeploy" depends="create-log-table">
        <dbdeploy driver="${jdbc.driverClassName}" url="${jdbc.url}" userid="${jdbc.username}" password="${jdbc.password}" dir="sql" />
    </target>

    <target name="create-log-table">
        <sql classpathref="dbdeploy.classpath" driver="${jdbc.driverClassName}" url="${jdbc.url}" userid="${jdbc.username}" password="${jdbc.password}" src="ddl/createSchemaVersionTable.mysql.sql" />
    </target>
</project>

I also improved the MySQL script from the dbdeploy distribution a bit, so that it won’t fail if it’s run again and again:

CREATE TABLE IF NOT EXISTS changelog (
 change_number BIGINT NOT NULL,
 complete_dt TIMESTAMP NOT NULL,
 applied_by VARCHAR(100) NOT NULL,
 description VARCHAR(500) NOT NULL,
 CONSTRAINT Pkchangelog PRIMARY KEY (change_number)
 );

When the RPM is installed, you only need to run “ant” to apply the needed SQL change sets.

root@manny:/opt/nomp-dbdeploy# ant
 Buildfile: /opt/nomp-dbdeploy/build.xml
create-log-table:
 [sql] Executing resource: /opt/nomp-dbdeploy/ddl/createSchemaVersionTable.mysql.sql
 [sql] 1 of 1 SQL statements executed successfully
dbdeploy:
 [dbdeploy] dbdeploy 3.0M3
 [dbdeploy] Reading change scripts from directory /opt/nomp-dbdeploy/sql...
 [dbdeploy] Changes currently applied to database:
 [dbdeploy] 1, 2
 [dbdeploy] Scripts available:
 [dbdeploy] 1, 2
 [dbdeploy] To be applied:
 [dbdeploy] (none)
BUILD SUCCESSFUL
 Total time: 0 seconds

Final step – setting up Jenkins

I will assume that the reader knows how to set up and configure Jenkins jobs. I did a vanilla Jenkins install and added the build pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) for a nice GUI and the manual triggers.

My pipeline

The pipeline runs automatically for each check-in.

Job #1 – “Nomp build”

Builds the root POM with the “deploy” goal. (Note: add -Dusername and -Dpassword flags for the SVN credentials, since the buildnumber plugin is used.)

Job #2 – “Nomp deploy to test”

ssh jenkins@test-server "yum -y update nomp-web nomp-tomcat nomp-dbdeploy;
cd /opt/nomp-dbdeploy; ant; /etc/init.d/nomp restart"

Note: you need to add the jenkins user to sudoers (using the NOPASSWD option) on the target and, of course, use SSH key authentication (Puppet does this for me).

Job #3 – “Nomp deploy to production” (manual trigger)

A manual step after smoke tests have been run (not automated for Nomp yet), to release to production. Exactly like the above, except different target server.

Next steps

For Nomp, the next step will be more Puppet config. I want to be able to build and start up a fully working web server and DB server from a standard EC2 AMI without any manual steps. This isn’t hard, but I can’t find the time right now. I need to ship new features to the customers too 🙂 After that, I’d love to look at using Capistrano (https://github.com/capistrano/capistrano/wiki) for deploy automation across many hosts. Currently Nomp only has a few servers, so ssh from Jenkins works fine for now.

Thank you for reading all the way to here. I’d love feedback if you think this is useful or not and if you agree on it being “developer friendly”. I have a pretty solid background in *nix admin, but I think most developers will understand and be able to maintain this setup, as compared to a solution more focused on using a sysadmin’s toolbox.

Lastly, please contribute with improvements if you find any.

I’ll try to find the time and energy to clean up the POMs and provide a skeleton project with a simple WAR, a Tomcat and the dbdeploy RPM config for download in a week or so.

Added: Here’s an overview of the current continuous deployment environment at Nomp.se

Nomp Continuous Deployment architecture


FreeMarker, SLF4J and Spring

I’ve just spent three hours trying to get FreeMarker to stop spitting out “DEBUG cache:81” messages in my Spring application.

FreeMarker recently hacked SLF4J support into 2.3, but I had a hard time finding out how to enable it, so I reckoned I’d share my experiences.

By default, FreeMarker 2.3 looks for logging libraries in this order, using the class loader of the FreeMarker classes: Log4J, Avalon, java.util.logging. The first one it finds in this list will be the one used for logging.

I found out that you can override this behavior in 2.3.18 by calling:

freemarker.log.Logger.
    selectLoggerLibrary(freemarker.log.Logger.LIBRARY_SLF4J);

However, this code needs to run before any FreeMarker classes are initialized.

After trying a few different tricks, such as having a load-on-startup servlet’s init() configure the logger, I ended up with a fairly clean solution.

I extended Spring’s FreeMarkerConfigurer class like this:

import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.servlet.view.freemarker.FreeMarkerConfigurer;

import freemarker.template.TemplateException;

public class PluxFreeMarkerConfigurer extends FreeMarkerConfigurer {

    private Logger logger = LoggerFactory
            .getLogger(PluxFreeMarkerConfigurer.class);

    @Override
    public void afterPropertiesSet() throws IOException, TemplateException {
        // switch the logger library before the parent class touches FreeMarker
        fixFreemarkerLogging();
        super.afterPropertiesSet();
    }

    private void fixFreemarkerLogging() {
        try {
            freemarker.log.Logger
              .selectLoggerLibrary(freemarker.log.Logger.LIBRARY_SLF4J);
            logger.info("Switched broken Freemarker logging to slf4j");
        } catch (ClassNotFoundException e) {
            logger.warn("Failed to switch broken Freemarker logging to slf4j");
        }
    }
}

and changed my Spring config to use my class to initialize FreeMarker instead:

  <!-- FreeMarker engine that configures Freemarker for SLF4J-->
  <bean id="freemarkerConfig" class="com.selessia.plux.web.PluxFreeMarkerConfigurer"
 ...
 </bean>

Hope this helps someone.

Have you walked down the ORM road of death?

A friend of mine asked me a really good question tonight:

Hey Stefan,
It would be great if you could please give me a sense for how many development teams get hit by a database bottleneck in JEE / Java / 3-tier / ORM / JPA land? And, how they go about addressing it? What exactly causes their bottleneck?

I think most successful apps – scaling problems are hopefully a sign that people are actually using the stuff, right? – built with Hibernate/JPA hit database contention pretty early on. From what I’ve seen, this is usually caused by making excessive round trips over the wire or returning overly large result sets.

We then spend time fixing all the obviously broken data access patterns: first by using HQL instead of the default eager/lazy fetching, then by tuning the existing HQL, and finally with direct SQL if needed.

I believe the next step after this is typically to try to scale vertically, both in the db and app tier. Throwing more hardware at the problem may get us quite a bit further at this point.

Then we might get to the point where the app gets fixed so that it actually makes sense to scale horizontally in the app tier. We will probably have to add a load balancer to the mix and use sticky sessions by now.

And then we will perhaps find out that we can’t do that very well without a distributed 2nd-level cache, and that all our direct SQL writing to the DB (which bypasses the 2nd-level cache) won’t allow us to use the 2nd-level cache for reads either…

Here is where I think there are many options, and I’m not sure how people tend to go from here. We might see some people abandoning ORM, while others may try to get the 2nd-level cache to work?

Are these the typical steps for scaling up a Java Hibernate/JPA app? What’s your experience?

The (almost) perfect (rich) website

I am personally a fan of lightweight web pages that use W3C standards-based elements and layout. However, many commercial web sites seem to want to move to a more “print-like” experience.

The cost of moving to a richer experience is usually higher maintenance cost and longer round-trip times for changes: you need the graphics or Flash guys for many of them. SEO (Search Engine Optimization) suffers, as graphics can’t be indexed by web crawlers, and you usually take a hit on page load times too.

Wouldn’t it be great if you could make a web site that is:

  • Great looking
  • SEO friendly
  • Quick to load and render
  • and is XHTML compliant

We have come a long way at unibet.com, but we made some compromises in look and feel for speed, and we also still render article headers as generated images. This has bothered me for some time. One of our consultants mentioned that he knew of someone who used Flash for rendering headlines, and it sounded like a good idea to me. I did some research and stumbled upon sIFR.

sIFR (or Scalable Inman Flash Replacement) is a technology that allows you to replace text elements on screen with Flash equivalents. Put simply, sIFR allows website headings, pull-quotes and other elements to be styled in whatever font the designer chooses – be that Foundry Monoline, Gill Sans, Impact, Frutiger or any other font – without the user having it installed on their machine. sIFR provides some JavaScript files and a Flash movie in source form (.fla) that you can embed your fonts into. It’s really easy to set up.

To use sIFR on your website you embed the font (be careful to encode all, but only, the characters you will need) to minimize the size of the Flash movie. Typically the SWF movie is between 8 and 70 kB. This may seem like a lot more than an image, but remember that the SWF will be cached for a very long time in the browser if you’ve set up your web server correctly. Effectively, the font Flash will only be downloaded once per site visit, or not at all.

When you have made the SWFs you need, just add a few lines of sIFR code to the web page and that’s it.
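From memory of the sIFR 3 documentation, those few lines look roughly like this (the font name and paths are illustrative):

// sifr-config.js - activate the embedded font and replace all h1 elements.
var frutiger = { src: "/swf/frutiger.swf" }; // the SWF with the embedded font

sIFR.activate(frutiger);

sIFR.replace(frutiger, {
  selector: "h1",
  css: ".sIFR-root { color: #333333; }"
});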

The following explains the sIFR process in the browser:

  1. A web page is requested and loaded by the browser.
  2. Javascript detects if Flash 6 or greater is installed.
  3. If no Flash is detected, the page is drawn as normal.
  4. If Flash is detected, the HTML element of the page is immediately given the class “hasFlash”. This effectively hides all text areas to be replaced but keeps their bounds intact. The text is hidden because of a style in the style sheet which only applies to elements that are children of the html.hasFlash element.
  5. The javascript traverses through the DOM and finds all elements to be replaced. Once found, the script measures the offsetWidth and offsetHeight of the element and replaces it with a Flash movie of the same dimensions.
  6. The Flash movie, knowing its textual content, creates a dynamic text field and renders the text at a very large size (96pt).
  7. The Flash movie reduces the point size of the text until it all fits within the overall size of the movie.

sIFR is a clever hack, but a hack nonetheless. The result, however, is really amazing. It’s hardly noticeable to the end user and meets all four requirements in my “wouldn’t it be great” list above, so we’re moving to sIFR for the next release of unibet.com.

While sIFR gives us better typography today, it is clearly not the solution for the next 20 years.

Further reading:

The sIFR3 Wiki

Would you want to live in your code?

I’ve had many discussions with developers about whether their code is “good”, “maintainable” or “effective”. Starting today I’ll ask them whether their code is habitable.

Any (building) architect would likely want to live in a house they consider to have good architecture. It should be solid, functional and effective. The last thing you want is to live in a maze of abstract rooms, or to open five doors to get into the kitchen.

To me great code has the following qualities:

  1. It solves the business problem, and only that.
  2. It maximizes communication. This is really key. Your code should be easy to parse and understand for humans. People are not telepathic, so please write code that others can read without having to solve a puzzle or slide into a hell of recursive roller coasters. Is writing documentation often boring? Yes. Does great code need much documentation? No.
  3. It minimizes “accidental” complexity introduced by crappy frameworks (remember EJB 2.1?) or old code patterns. Keep it simple. Focus on the business problem.
  4. Great code does not take into consideration what might happen at some future date. No one can see into the future, so please don’t try: you are only adding complexity and hurting time to market and the project budget.

A great developer understands that business value provided per dollar spent is what counts.