My adventures in front-end-land

Or: two Backbone apps later… lessons learned.

I have to admit, I’ve been a secret admirer of great front-end developers. I always thought my brain wasn’t wired for HTML/JS/CSS.
My wife Marianne and I started a SaaS about a year ago, aimed at off-loading teachers who teach math to kids aged 6-12. You can find the service at http://nomp.se/
The service makes it a breeze to hand out personalized assignments and to follow them up. All content is automatically generated, so the teacher doesn’t have to produce the assignments either. The service is free for all kids to use. If you have a subscription, you can do the teacher/parent thing.

When you run a bootstrapped (self-funded) business you have to roll up your sleeves and just get things done, and often those are things you haven’t done before. Marianne and I are Java developers by trade. I have a background in Unix sysadmin work but haven’t done it professionally in 12 years. Instead I’ve been coding Java, with a break between 2007 and 2011 when I was Chief Architect at Unibet. That was technically challenging and a great learning experience, but there is no time for coding when you’re constantly running or governing key transformation projects while trying to be a line manager at the same time.

Some of the jobs that need to be performed in our startup include:

  • Business development / requirements management
  • Back-end developer (Java/Spring)
  • Front-end developer (HTML/CSS/LESS, jQuery/Underscore/Backbone)
  • Quality assurance (testing in all the different browsers, etc.)
  • Systems administrator (Amazon Web Services: EC2/RDS/R53/SES/SQS/Cloudfront, Apache/Tomcat perf tuning, Security patching)
  • Database administrator (Tuning, backup strategy)
  • Configuration management (Maven/Jenkins/RPM-packaging and deploy automation)
  • Sales and marketing (Social media campaigns and online ads)
  • Finance (bookkeeping, invoicing)
  • Customer service (email support)

As both Marianne and I still have our day jobs to manage, there are a lot of late nights… but also a lot of fun!

The first versions of our math SaaS were a “traditional” server-side rendered application using Spring MVC and Freemarker for templating.
After the initial launch, we spent some time looking at rewriting the most heavily used part – the student quiz – in JavaScript in order to improve the user experience. We didn’t know much about jQuery, JavaScript or Ajax at this point.

The first rewrite did the job. It off-loaded the server and improved response times, as eleven server round trips were reduced to two (start the quiz and submit the result). However, it was unmaintainable for a number of reasons:

  1. No namespacing or modules (all functions in the global scope)
  2. Adding a new quiz type involved changing the core quiz logic, which meant that stuff broke
  3. The DOM manipulation was spread out all over the JavaScript code
  4. Client-side templating of the quizzes was done with custom code
  5. We had two implementations, one for desktop and another for mobile (jQuery Mobile)
  6. The markup was a mess, without good structure and class names
  7. The HTML5 canvas code was too low level to be productive
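Problem #1 in the list – everything living in the global scope – is typically cured with the module pattern. A minimal sketch (the names here are illustrative, not our actual code):

```javascript
// An IIFE (immediately-invoked function expression) creates a private scope;
// only the returned object is exposed, under a single global namespace.
var Nomp = Nomp || {};

Nomp.quiz = (function () {
    var score = 0; // private state, invisible to other scripts

    function addPoints(points) {
        score += points;
    }

    function getScore() {
        return score;
    }

    // The public API is the only thing that leaves the closure.
    return { addPoints: addPoints, getScore: getScore };
})();

Nomp.quiz.addPoints(5);
console.log(Nomp.quiz.getScore()); // 5
console.log(typeof score);         // "undefined" – nothing leaked globally
```

With one namespace object per feature, adding a new quiz type no longer means stomping on functions defined elsewhere.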

So after spending some additional months on other things, we increasingly felt that the solution had grown out of hand.
We then spent some time researching the front-end open source space and found some interesting projects:

Backbone.js

Backbone is a tiny MVP (model-view-presenter) framework for developing client-side applications in JavaScript.
For better and for worse, it’s a tiny framework, only about 7 KB to transfer minified and gzipped.
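The core idea Backbone gives you – models that emit events and views that react to them – can be sketched in a few lines of plain JavaScript. This is an illustration of the pattern only, not Backbone’s actual implementation:

```javascript
// A tiny observable model: set() stores an attribute and notifies listeners,
// much like Backbone's change events.
function Model(attributes) {
    this.attributes = attributes || {};
    this.listeners = [];
}

Model.prototype.on = function (callback) {
    this.listeners.push(callback);
};

Model.prototype.set = function (key, value) {
    this.attributes[key] = value;
    this.listeners.forEach(function (cb) { cb(key, value); });
};

Model.prototype.get = function (key) {
    return this.attributes[key];
};

// A view subscribes to the model and re-renders on change,
// instead of scattering DOM updates all over the code base.
var model = new Model({ name: "Nomp" });
var rendered = "";
model.on(function () {
    rendered = "Hello " + model.get("name");
});
model.set("name", "Marianne");
console.log(rendered); // "Hello Marianne"
```

Backbone adds the rest on top of this wiring: collections, REST sync, routing and view event delegation.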

Twitter Bootstrap

Bootstrap is an HTML/CSS framework with standardized markup and ready-made components that can be customized. It supports responsive design, so the same site can be used on mobile, tablet and desktop.

KineticJS

KineticJS is an HTML5 Canvas JavaScript framework that enables high performance animations, transitions, node nesting, layering, filtering, caching, event handling for desktop and mobile applications, and much more.

We then started to rewrite the quiz engine once again, this time using Backbone and Bootstrap, and it was quite a struggle for JavaScript rookies like us.
Backbone is elegant, but there is a lack of good tutorials beyond the trivial hello-world examples. I learned a lot from Christophe Coenraets’ blog post: http://coenraets.org/blog/2012/05/single-page-crud-application-with-backbone-js-and-twitter-bootstrap/ even though it’s very simple (in retrospect).

We also struggled with things I take for granted in other frameworks, such as a best-practice project structure and naming practices.

Nevertheless, we shipped a new quiz engine in Backbone/Bootstrap for Nomp.se in less than a month, which was pretty OK considering our skill level at the time. Was it perfect? No, but it was a huge improvement. Now we can extend the quiz engine with new quizzes quickly, in a modular way.

The next evolution was a complete rewrite of the teacher/parent back office, which was pretty limited from a functionality point of view and, to be honest, a horrible user experience.

This was a considerably bigger effort, with twenty-odd views and perhaps twice as many use cases.
In this case we felt it was necessary to use a module system (and a module loader) in order to track dependencies between components.

Require.js does a pretty good job at this, but we felt the documentation was hard to follow. It comes with an optimizer to minify and combine JavaScript. Not too many people seem to be integrating JavaScript into their Maven builds either, so it took a while to find a good Maven plugin that lets you run JavaScript from Maven.
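What a module loader actually does – register modules by name, resolve their dependencies, and hand them to a factory function – can be illustrated with a toy synchronous loader. The real Require.js also loads scripts asynchronously; this sketch just shows the dependency-tracking idea:

```javascript
// A toy AMD-style registry: define() records a factory and its dependencies,
// requireModule() resolves dependencies (recursively) before invoking the factory.
var registry = {};
var cache = {};

function define(name, deps, factory) {
    registry[name] = { deps: deps, factory: factory };
}

function requireModule(name) {
    if (cache[name]) return cache[name]; // each module is created only once
    var mod = registry[name];
    var resolved = mod.deps.map(requireModule);
    cache[name] = mod.factory.apply(null, resolved);
    return cache[name];
}

// Usage, mirroring how views can declare their dependencies:
define("util", [], function () {
    return { shout: function (s) { return s.toUpperCase(); } };
});

define("quizView", ["util"], function (util) {
    return { title: util.shout("quiz") };
});

console.log(requireModule("quizView").title); // "QUIZ"
```

The point of the pattern is that "quizView" never reaches into the global scope for its helpers; the loader hands them in, so the dependency graph is explicit and optimizable.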

When it comes to data binding, we started out with Backbone.Forms, but we quickly felt it was overly complicated and not always designed to be extended. I18n wasn’t well supported either.

To minimize duplication, we ended up rolling our own minimal data-binding solution, consisting of a Backbone model mixin, a view mixin and a few JS helpers.

Here’s how the resulting code looks when using the mixins. The code implements a form with client-side and server-side validation; both types of errors are displayed in the form.

Backbone Model and View

Backbone.Model.extend({
    urlRoot: "/api/profile",
    initialize: function() {
        var mixin = new Util.ErrorHandlingMixin(); // <--- the pixie dust
        _.extend(this, mixin);
    },
    validate: function(attributes, options) {
        var errors = [];
        if (this.get('firstName') == null || this.get('firstName').length < 2) {
            errors.push({ attr: 'firstName', error: 'You need to provide at least two characters' });
        }
        if (this.get('lastName') == null || this.get('lastName').length < 2) {
            errors.push({ attr: 'lastName', error: 'You need to provide at least two characters' });
        }
        if (this.get('rewardIcon') == null) {
            errors.push({ attr: 'rewardIcon', error: 'Mandatory field' });
        }
        if (errors.length > 0) {
            return errors;
        }
        return null;
    },
    getAllRewardIcons: function() {
        return ["BLUE_STAR", "YELLOW_STAR", "GREEN_STAR", "ORANGE_STAR",
                "BLUE_HEART", "YELLOW_HEART", "GREEN_HEART", "ORANGE_HEART",
                "BLUE_SMILEY", "YELLOW_SMILEY", "GREEN_SMILEY", "ORANGE_SMILEY",
                "BLUE_CANDY", "YELLOW_CANDY", "GREEN_CANDY", "ORANGE_CANDY"];
    },
    getAllLocales: function() {
        return ["en_GB", "sv_SE"];
    }
});

BaseView.extend({
    initialize: function() {
        var mixin = new Util.ErrorHandlingViewMixin(); // <--- the pixie dust
        _.extend(this, mixin);
        this.model.on("invalid", this._showErrors, this);
    },
    events: {
        "click .save-form": "saveForm",
        "focus input": "validateForm"
    },
    template: _.template(tpl),
    render: function() {
        $(this.el).html(this.template(_.defaults(this.model.toJSON(),
            { rewardIcons: this.model.getAllRewardIcons(), locales: this.model.getAllLocales() })));
        return this;
    }
});

Markup with helper functions

<div>
  <form class="form-horizontal">
    <fieldset>
      <legend><%= i18n.t('Personal information') %></legend>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('First name') %></label>
        <div class="controls">
          <input name="firstName" type="text" value="<%- firstName %>">
          <div class="help-inline"></div>
        </div>
      </div>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('Last name') %></label>
        <div class="controls">
          <input name="lastName" type="text" value="<%- lastName %>">
          <div class="help-inline"></div>
        </div>
      </div>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('Description') %></label>
        <div class="controls">
          <input name="description" type="text" value="<%- description %>">
          <div class="help-inline"></div>
          <div class="help-block"><%= i18n.t('The description is shown to students that would like to become your mentee') %></div>
        </div>
      </div>
    </fieldset>
    <fieldset>
      <legend><%= i18n.t('Settings') %></legend>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('Language') %></label>
        <div class="controls">
          <%= forms.select("locale", locale, "Locale", locales) %>
        </div>
        <div class="help-inline"></div>
      </div>
      <div class="control-group">
        <div class="controls">
          <%= forms.checkbox("subscribedToNewsletters", subscribedToNewsletters, i18n.t('Send me news about Nomp (about two times per month)')) %>
        </div>
      </div>
      <div class="control-group">
        <div class="controls">
          <%= forms.checkbox("subscribedToNotifications", subscribedToNotifications, i18n.t('Send me quest notification emails')) %>
        </div>
      </div>
      <div class="control-group">
        <label class="control-label"><%= i18n.t('Reward symbol') %></label>
        <div class="controls">
          <%= forms.select("rewardIcon", rewardIcon, "RewardIcon", rewardIcons) %>
          <span><img class="reward-icon" src="/static/img/rewardicon/<%= rewardIcon %>.png"></span>
          <div class="help-block"><%= i18n.t('The reward symbol is used in quests that you give out.') %></div>
        </div>
      </div>
    </fieldset>
    <div class="form-actions">
      <button class="btn btn-primary save-form"><%= i18n.t('Save') %></button> <span class="server-error error"></span>
    </div>
  </form>
</div>
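For the curious: stripped of the jQuery/DOM details, the view mixin’s _showErrors essentially maps Backbone-style validation errors (as returned by Model.validate above) onto the .help-inline element next to each input. Here is a simplified, hypothetical version of that idea – the names and the DOM-free bookkeeping are illustrative, not our actual implementation:

```javascript
// Hypothetical simplified mixin: turns a Backbone-style error array
// (objects with attr/error, as returned by Model.validate) into a map of
// field name -> message. The real mixin would write each message into the
// .help-inline div that sits next to input[name=<attr>].
var ErrorHandlingViewMixin = {
    _collectErrors: function (errors) {
        var byField = {};
        (errors || []).forEach(function (e) {
            byField[e.attr] = e.error;
        });
        return byField;
    },
    _showErrors: function (model, errors) {
        // Signature mirrors Backbone's "invalid" event: (model, error).
        // Instead of touching the DOM, this sketch just records the result.
        this.errorState = this._collectErrors(errors);
    }
};

// Mixing into a plain object (in the app this is done with _.extend on the view):
var view = Object.assign({}, ErrorHandlingViewMixin);
view._showErrors(null, [{ attr: "firstName", error: "You need to provide at least two characters" }]);
console.log(view.errorState.firstName); // "You need to provide at least two characters"
```

Because the markup follows a strict convention (one .help-inline per .controls div), a mixin like this can bind errors generically for every form without per-view glue code.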

We’d love your feedback on the binding issue. If people find it useful, we’ll probably open-source the small data-binding framework once we’ve used it a bit more. We think it minimizes boilerplate code and avoids duplication without violating Backbone principles, such as using Model.validate() for validation.

Let us know what you think!

An attempt at a developer-friendly build pipeline

Background

I’ve spent some evenings/nights over the Christmas holiday improving the deployment of Nomp.se, a site where kids can practice math for free, which we run on EC2.

The situation we had was that we deployed to the EC2 server using a locally installed Jenkins CI server, which built the artifact (a WAR) and used the Maven Tomcat plugin to deploy to the local Tomcat server, an RPM package provided by Amazon (yum install tomcat6). The setup worked pretty OK, but it was a hack. Database changes were applied and tested manually – we had a folder “sql” containing numbered SQL files that should be applied in order.

Clearly a lot of room for improvement in this area!

Goals with the new build pipeline

I wanted to reach the following goals with the new build pipeline:

  • One build that travels all the way from my local Jenkins through the test environments and into production.
  • 100% control over configuration changes to all components (Apache httpd, Apache Tomcat, MySQL database), so that changes can be tested in the normal pipeline without relying on manual hacks.
  • It should be developer friendly. A developer with a basic understanding of Linux, Maven and Tomcat should be able to make changes to and work with the build pipeline.
  • Hence, it should only rely on basic tooling (Ant, Maven, RPM packages) for the heavy lifting, and treat the capabilities of other tools, e.g. Jenkins, Puppet, Capistrano, as (non-critical) value-add.

After a few iterations I arrived at the following procedure for deploying any configuration change onto a production server.

on the build server:
 $ mvn deploy

on the target server:
 # yum -y update nomp-web nomp-tomcat nomp-dbdeploy
 # cd /opt/nomp-dbdeploy; ant
 # /etc/init.d/nomp restart

That’s it. Four steps. There are no shell scripts involved, no rsync, no scp-ing of files. How did I do it? Hold on, I’ll come to that in a minute or two :)

System configuration and prerequisites

In order to make sure the server contains the prerequisite packages and configuration I used Puppet.

“Puppet is a declarative language for expressing system configuration, a client and server for distributing it, and a library for realizing the configuration.

Rather than approaching server management by automating current techniques, Puppet reframes the problem by providing a language to express the relationships between servers, the services they provide, and the primitive objects that compose those services. Rather than handling the detail of how to achieve a certain configuration or provide a given service, Puppet users can simply express their desired configuration using the abstractions they’re used to handling, like service and node, and Puppet is responsible for either achieving the configuration or providing the user enough information to fix any encountered problems.”

from http://projects.puppetlabs.com/projects/puppet/wiki/Big_Picture

I’m not going to go into detail on how to set up Puppet in this text, but here’s what I do with Puppet in order to support the build pipeline:

  • Ensure that the service accounts and groups exist on the target system
  • Ensure that the software I rely on is installed (ant, apache httpd, mysqld)
  • Configuration management of a few configuration files, such as httpd.conf and my.cnf

Puppet config file example:

user { 'nomp':
  ensure     => present,
  uid        => 300,
  gid        => 300,
  shell      => '/bin/bash',
  home       => '/opt/nomp',
  managehome => true,
}

group { 'nomp':
  ensure => present,
  gid    => 300,
}

package { 'ant':
  ensure => installed,
}

The above configuration means that Puppet will ensure that the user nomp and the group nomp exist on the system and that the ant package is installed.
I will do a whole lot more configuration management and provisioning with Puppet going forward, but the above was what was needed to meet my project goals.

Getting started

I started by trying to package my existing WAR project as an RPM (or .deb). After Googling around for a while I found the RPM Maven Plugin (http://mojo.codehaus.org/rpm-maven-plugin/). It basically lets you build RPMs using Maven. The downside is that it relies on the rpm command being installed in order to produce the final RPM from the spec file. In order to keep a working Maven environment on all platforms, I wrapped the rpm plugin in a Maven build profile.

(Later I also found a pure-Java RPM tool, redline-rpm, but I haven’t looked into it yet.)

The trickiest part was getting a good setup for artifact versions and RPM release versions, so that the Maven release plugin could still be used without any manual changes.
The rpm plugin has some funky defaults (http://mojo.codehaus.org/rpm-maven-plugin/ident-params.html#release) that weren’t going to work with “yum update”.
It took a lot of experimentation, but in the end I settled on the Build Number Maven Plugin (http://mojo.codehaus.org/buildnumber-maven-plugin/).
It’s a pretty simple plugin that checks the SCM for the revision number and exposes it as a Maven variable.

Here’s the RPM-part of my WAR POM:

<profiles>
  <profile>
    <id>rpm</id>
    <activation>
      <os>
        <name>linux</name>
      </os>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>rpm-maven-plugin</artifactId>
          <version>2.1-alpha-1</version>
          <extensions>true</extensions>
          <executions>
            <execution>
              <goals>
                <goal>attached-rpm</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <copyright>Copyright 2011 Selessia AB</copyright>
            <distribution>Nomp</distribution>
            <group>${project.groupId}</group>
            <packager>${user.name}</packager>
            <!-- need to use the build number plugin here in order for yum upgrade to work in snapshots -->
            <release>${buildNumber}</release>
            <defaultDirmode>555</defaultDirmode>
            <defaultFilemode>444</defaultFilemode>
            <defaultUsername>nomp</defaultUsername>
            <defaultGroupname>nomp</defaultGroupname>
            <requires>
              <require>nomp-tomcat</require>
            </requires>
            <mappings>
              <!-- webapps deployment -->
              <mapping>
                <directory>${rpm.install.webapps}/${project.artifactId}</directory>
                <sources>
                  <source>
                    <location>target/${project.artifactId}-${project.version}</location>
                  </source>
                </sources>
              </mapping>
            </mappings>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Here’s the build number plugin configuration:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>buildnumber-maven-plugin</artifactId>
    <version>1.0</version>
    <executions>
      <execution>
        <phase>validate</phase>
        <goals>
          <goal>create</goal>
        </goals>
      </execution>
    </executions>
    <configuration>
      <doCheck>true</doCheck>
      <doUpdate>true</doUpdate>
    </configuration>
  </plugin>

All the configuration above adds a secondary artifact (the RPM), which gets uploaded to the Nexus Maven repository on “mvn deploy”.

I don’t really need the WAR file anymore, as I package the RPM exploded. I might change the primary artifact type from WAR to RPM in the future, but I haven’t looked into that yet.

Packaging the app server as an RPM

The next thing I wanted was to package the app server as an RPM as well. I feel it’s more developer friendly to build a Tomcat RPM using Maven than to just grab some arbitrary RPM and use Puppet to fix the configuration. Also, we get full control over where it is installed and where the logs are.

One thing I really wanted to avoid was having to check the Tomcat distribution tarball into Subversion. I hate blobs in SVN, so I was pleasantly surprised to learn that Nexus handles any type of file. I simply uploaded the latest Tomcat distribution tar (apache-tomcat-7.0.23.tar.gz) into my Nexus 3rd-party repository.

I created a sibling project “tomcat” with a pom that looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <!-- avoid rpm here as classifier will differ and Nexus search will fail -->
  <packaging>pom</packaging>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <artifactId>nomp-parent</artifactId>
    <groupId>se.nomp</groupId>
    <version>2.1.0-SNAPSHOT</version>
  </parent>
  <artifactId>nomp-tomcat</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>Nomp Tomcat Server</name>
  <description>Tomcat server for Nomp</description>
  <properties>
    <tomcat.version>7.0.23</tomcat.version>
    <tomcat.build.dir>${project.build.directory}/tomcat/apache-tomcat-${tomcat.version}</tomcat.build.dir>
    <rpm.install.basedir>/opt/nomp</rpm.install.basedir>
    <rpm.install.logdir>/var/log/nomp</rpm.install.logdir>
  </properties>
  <profiles>
    <!-- Only run the RPM packaging on Linux, as we need the rpm binary to build RPMs using the rpm plugin -->
    <profile>
      <id>rpm</id>
      <activation>
        <os>
          <name>linux</name>
        </os>
      </activation>
      <build>
        <plugins>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>rpm-maven-plugin</artifactId>
            <version>2.1-alpha-1</version>
            <extensions>true</extensions>
            <executions>
              <execution>
                <goals>
                  <goal>attached-rpm</goal>
                </goals>
              </execution>
            </executions>
            <configuration>
              <copyright>Copyright 2011 Selessia AB</copyright>
              <distribution>Nomp</distribution>
              <group>${project.groupId}</group>
              <packager>${user.name}</packager>
              <!-- need to use the build number plugin here in order for yum upgrade to work in snapshots -->
              <release>${buildNumber}</release>
              <defaultDirmode>755</defaultDirmode>
              <defaultFilemode>444</defaultFilemode>
              <defaultUsername>root</defaultUsername>
              <defaultGroupname>root</defaultGroupname>
              <mappings>
                <mapping>
                  <directory>${rpm.install.basedir}/logs</directory>
                  <sources>
                    <softlinkSource>
                      <location>${rpm.install.logdir}</location>
                    </softlinkSource>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${rpm.install.logdir}</directory>
                  <username>nomp</username>
                  <groupname>nomp</groupname>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/bin</directory>
                  <filemode>555</filemode>
                  <sources>
                    <source>
                      <location>${tomcat.build.dir}/bin</location>
                    </source>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/conf</directory>
                  <sources>
                    <source>
                      <location>${tomcat.build.dir}/conf</location>
                    </source>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/lib</directory>
                  <sources>
                    <source>
                      <location>${tomcat.build.dir}/lib</location>
                    </source>
                  </sources>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/work</directory>
                  <username>nomp</username>
                  <groupname>nomp</groupname>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/temp</directory>
                  <username>nomp</username>
                  <groupname>nomp</groupname>
                </mapping>
                <mapping>
                  <directory>${rpm.install.basedir}/conf/Catalina</directory>
                  <username>nomp</username>
                  <groupname>nomp</groupname>
                </mapping>
                <mapping>
                  <directory>/etc/init.d</directory>
                  <directoryIncluded>false</directoryIncluded>
                  <filemode>555</filemode>
                  <sources>
                    <source>
                      <location>src/main/etc/init.d</location>
                    </source>
                  </sources>
                </mapping>
              </mappings>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>

  <build>
    <resources>
      <resource>
         <!-- overlay the contents of the resources src dir on top of the unpacked tomcat -->
         <directory>src/main/resources</directory>
         <filtering>false</filtering>
       </resource>
     </resources>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>buildnumber-maven-plugin</artifactId>
        <version>1.0</version>
        <executions>
          <execution>
            <phase>validate</phase>
            <goals>
              <goal>create</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <doCheck>true</doCheck>
          <doUpdate>true</doUpdate>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-clean-plugin</artifactId>
        <version>2.4.1</version>
        <executions>
          <execution>
            <id>auto-clean</id>
            <phase>initialize</phase>
            <goals>
              <goal>clean</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-resources-plugin</artifactId>
        <version>2.5</version>
        <executions>
          <execution>
            <id>resources</id>
            <!-- need to specify, as this is not default for pom packaging -->
            <phase>process-resources</phase>
            <goals>
              <goal>resources</goal>
            </goals>
            <configuration>
              <encoding>UTF-8</encoding>
              <outputDirectory>${tomcat.build.dir}</outputDirectory>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <version>2.4</version>
        <executions>
          <execution>
            <id>unpack-tomcat</id>
            <phase>generate-resources</phase>
            <goals>
              <!-- unpack the tomcat dependency that's been downloaded from your local 3rd party repo -->
              <goal>unpack-dependencies</goal>
            </goals>
            <configuration>
              <outputDirectory>${project.build.directory}/tomcat</outputDirectory>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <!-- the tomcat distro that's been uploaded to the local third party maven repo -->
    <dependency>
       <groupId>org.apache.tomcat</groupId>
       <artifactId>apache-tomcat</artifactId>
       <version>${tomcat.version}</version>
       <type>tar.gz</type>
     </dependency>
   </dependencies>
</project>

Note that the Tomcat artifact is just a normal maven dependency. I used the maven-dependency-plugin to automatically unpack the archive.
I then overlay the configuration files I want to change with the well known maven-resources-plugin.

Okay. Now I was pretty happy. I was building two good RPMs with proper version and release numbers, deployed to my Nexus on “mvn deploy”.

Distributing the packages

The next step was then to export these files into a yum repository. Or so I thought…
I was pleasantly surprised – more like super-excited – when I realized that some awesome folks had made a plugin for Nexus (nexus-yum-plugin, http://code.google.com/p/nexus-yum-plugin/) that exposes a Nexus Maven repo as a yum repo!

If you have yum installed, just add a repository configuration to your target server (I use Puppet to automate this).

Here’s how it looks:

root@manny:/etc/yum/repos.d# cat nexus-snapshot.repo
 [nexus-snapshots]
 name=Nomp Nexus - Snapshots
 baseurl=http://manny:8082/nexus/content/repositories/snapshots/
 enabled=1
 gpgcheck=0

You need to add one config for your snapshot repo and another for your release repo.
Test your setup with “yum list” (you need to redeploy at least one RPM artifact in each repo in order for the yum-plugin to create the RPM-repo).

root@manny:/etc/yum/repos.d# yum list
Installed Packages
 nomp-dbdeploy.noarch 0.0.2-1788 @maven-snapshots
 nomp-tomcat.noarch 0.0.1-1788 @maven-snapshots
 nomp-web.noarch 2.1.0-1788 @maven-snapshots
Available Packages
 nomp-dbdeploy.noarch 0.0.2-1793 maven-snapshots
 nomp-tomcat.noarch 0.0.1-1793 maven-snapshots
 nomp-web.noarch 2.1.0-1793 maven-snapshots

In order to transfer the RPM packages and install the software, you just type:

# yum -y install nomp-web

or if already installed:

# yum -y update nomp-web nomp-tomcat

Pretty sweet! It’s so easy for anyone to find out what is installed/deployed on a server using rpm packages!

The database is code too

In order to ensure that database scripts are tested throughout the deploy pipeline, we also need to treat our database scripts as code that is run in each environment.
I like to use dbdeploy (http://code.google.com/p/dbdeploy/) for database patch script packaging. Dbdeploy is a simple database change management tool that applies SQL files in a specified order. It can be run from the command line or from Ant. It has a Maven plugin as well, but I don’t want to use that, as I don’t want Maven installed on the production servers.
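The idea behind dbdeploy is simple enough to sketch: compare the numbered scripts on disk with the change numbers already recorded in the changelog table, and apply only the missing ones, in ascending order. Here is that core selection logic illustrated in JavaScript (dbdeploy itself is Java; the names below are made up for illustration):

```javascript
// Sketch of dbdeploy's core logic: given the change numbers already recorded
// in the changelog table and the numbered SQL scripts on disk, return the
// scripts that still need to be applied, in ascending order.
function pendingScripts(appliedNumbers, scriptsOnDisk) {
    var applied = new Set(appliedNumbers);
    return scriptsOnDisk
        .filter(function (s) { return !applied.has(s.number); })
        .sort(function (a, b) { return a.number - b.number; });
}

var applied = [1, 2]; // from the changelog table
var onDisk = [
    { number: 1, file: "001_create_users.sql" },
    { number: 2, file: "002_add_quiz_table.sql" },
    { number: 3, file: "003_add_reward_icon.sql" }
];

// Only script 3 is pending; after applying it, its number is recorded in the
// changelog table so the next run becomes a no-op.
console.log(pendingScripts(applied, onDisk).map(function (s) { return s.file; }));
// [ '003_add_reward_icon.sql' ]
```

This is why the tool is safe to run repeatedly in every environment: already-applied change sets are simply skipped.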

I ended up making a separate RPM with the SQL change scripts for the application, and packaged the Maven dependencies with the RPM. The main entry point is a build.xml script for Nomp.

The build.xml I use for the dbdeploy package looks like this:

<project name="MyProject" default="dbdeploy" basedir=".">
    <description>dbdeploy script for nomp</description>
    <record name="dbdeploy.log" loglevel="verbose" action="start" />
    <path id="dbdeploy.classpath" >
        <fileset dir="lib">
            <include name="*.jar" />
        </fileset>
    </path>

    <taskdef name="dbdeploy" classname="com.dbdeploy.AntTarget" classpathref="dbdeploy.classpath" />

    <target name="dbdeploy" depends="create-log-table">
        <dbdeploy driver="${jdbc.driverClassName}" url="${jdbc.url}" userid="${jdbc.username}" password="${jdbc.password}" dir="sql" />
    </target>

    <target name="create-log-table">
        <sql classpathref="dbdeploy.classpath" driver="${jdbc.driverClassName}" url="${jdbc.url}" userid="${jdbc.username}" password="${jdbc.password}" src="ddl/createSchemaVersionTable.mysql.sql" />
    </target>
</project>

I also improved the dbdeploy distribution’s MySQL script a bit so that it won’t fail if it’s run again and again:

CREATE TABLE IF NOT EXISTS changelog (
    change_number BIGINT NOT NULL,
    complete_dt TIMESTAMP NOT NULL,
    applied_by VARCHAR(100) NOT NULL,
    description VARCHAR(500) NOT NULL,
    CONSTRAINT Pkchangelog PRIMARY KEY (change_number)
);

When the RPM is installed, you just run “ant” to apply the needed SQL change sets.

root@manny:/opt/nomp-dbdeploy# ant
 Buildfile: /opt/nomp-dbdeploy/build.xml
create-log-table:
 [sql] Executing resource: /opt/nomp-dbdeploy/ddl/createSchemaVersionTable.mysql.sql
 [sql] 1 of 1 SQL statements executed successfully
dbdeploy:
 [dbdeploy] dbdeploy 3.0M3
 [dbdeploy] Reading change scripts from directory /opt/nomp-dbdeploy/sql...
 [dbdeploy] Changes currently applied to database:
 [dbdeploy] 1, 2
 [dbdeploy] Scripts available:
 [dbdeploy] 1, 2
 [dbdeploy] To be applied:
 [dbdeploy] (none)
BUILD SUCCESSFUL
 Total time: 0 seconds

Final step – setting up Jenkins

I will assume that the reader knows how to set up and configure Jenkins jobs. I did a vanilla Jenkins install and added the Build Pipeline plugin (https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin) for a nice GUI and the manual triggers.

My pipeline

The pipeline runs automatically on each check-in.

Job #1 – “Nomp build”

Builds the root POM with goal “deploy”. (Note: add -Dusername and -Dpassword flags for the SVN credentials, since the buildnumber plugin is used.)

Job #2 – “Nomp deploy to test”

ssh jenkins@test-server "yum -y update nomp-web nomp-tomcat nomp-dbdeploy;
cd /opt/nomp-dbdeploy; ant; /etc/init.d/nomp restart"

Note: you need to add jenkins to sudoers (using the NOPASSWD option) on the target, and use SSH key auth of course (Puppet does this for me).

Job #3 – “Nomp deploy to production” (manual trigger)

A manual step after smoke tests have been run (not automated for Nomp yet), to release to production. Exactly like the above, except with a different target server.

Next steps

For Nomp, the next step will be more Puppet config. I want to be able to build and start up a fully working web server and db server from a standard EC2 AMI without any manual steps. This isn't hard, but I can't find the time right now. Need to add new features for the customers too :) After that, I'd love to look at using Capistrano (https://github.com/capistrano/capistrano/wiki) for deploy automation to many hosts. Currently Nomp only has a few servers, so ssh from Jenkins works fine for now.

Thank you for reading all the way to here. I’d love feedback if you think this is useful or not and if you agree on it being “developer friendly”. I have a pretty solid background in *nix admin, but I think most developers will understand and be able to maintain this setup, as compared to a solution more focused on using a sysadmin’s toolbox.

Lastly, please contribute with improvements if you find any.

I'll try to find the time and energy to clean up the POMs and provide a skeleton project that has a simple WAR, a Tomcat and the dbdeploy RPM config for download in a week or so.

Added: Here’s an overview of the current continuous deployment environment at Nomp.se

Nomp Continuous Deployment architecture


Backing up EC2 MySQL and configuration files to S3

I’ve been spending a few hours setting up a good back-up strategy for my EC2 server, running NOMP.se.

The service runs on a single reserved small instance at present. It’s using Amazon’s Linux distro with an Elastic Block Storage (EBS) root disk.

The first thing you should do after setting up an EC2 host is to make an EBS snapshot. An EBS snapshot is a full disk device dump (like "dd" produces, if you're a Unix hacker). While EBS snapshots are a great feature and should be a cornerstone of any EC2 backup strategy, they are full volume dumps and hence take a lot of space.

To complement my EBS snapshots, which I run manually before and after bigger changes (yum update, package installs etc.), I hacked together a little shell script in 1337 bytes (really) that backs up my MySQL databases in a supported manner (mysqldump) and also backs up a number of configuration files from the file system. The script makes use of a great tool called s3cmd, which is used to upload files to S3 (Amazon's Simple Storage Service).

How to set up the script (all steps as root):

  1. Install s3cmd
  2. Run s3cmd --configure
  3. Copy the generated .s3cfg file to /etc/s3cfg (the location the script expects)
  4. Download the S3 backup script to /etc/cron.daily/
  5. Edit the script to suit your needs.

I hope someone finds this useful!

The S3 console after a successful run

Here’s what the script looks like:

## Specify database schemas to back up, and credentials
DATABASES="nompdb wp_blog"

## Syntax: <databasename as above>_USER and _PW
## _USER is mandatory, _PW is optional
nompdb_USER=root
wp_blog_USER=root

## Specify directories to back up (it's clever to use relative paths)
DIRECTORIES="root etc/cron.daily etc/httpd etc/tomcat6 tmp/jenkinsbackup" 

## Initialize some variables
DATE=$(date +%Y%m%d)
DATETIME=$(date +%Y%m%d-%H%M)
BACKUP_DIRECTORY=/tmp/backups
S3_CMD="/usr/bin/s3cmd --config /etc/s3cfg"

## Specify where the backups should be placed
S3_BUCKET_URL=s3://nomp-backup/$DATE/

## The script
cd /
mkdir -p $BACKUP_DIRECTORY
rm -rf $BACKUP_DIRECTORY/*

## Backup MySQL:s
for DB in $DATABASES
do
BACKUP_FILE=$BACKUP_DIRECTORY/${DATETIME}_${DB}.sql
USER=$(eval echo \$${DB}_USER)
PASSWORD=$(eval echo \$${DB}_PW)
if [ -n "$PASSWORD" ]
then
  /usr/bin/mysqldump -v --user=$USER --password=$PASSWORD -h localhost -r $BACKUP_FILE $DB 2>&1
else
  /usr/bin/mysqldump -v --user=$USER -h localhost -r $BACKUP_FILE $DB 2>&1
fi
/bin/gzip $BACKUP_FILE 2>&1
$S3_CMD put ${BACKUP_FILE}.gz $S3_BUCKET_URL 2>&1
done

## Backup of config directories
for DIR in $DIRECTORIES
do
BACKUP_FILE=${DATETIME}_$(echo $DIR | sed 's/\//-/g').tgz
/bin/tar zcvf ${BACKUP_FILE} $DIR 2>&1
$S3_CMD put ${BACKUP_FILE} $S3_BUCKET_URL 2>&1
done
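The script leans on two small shell tricks worth calling out: an eval-based indirect variable lookup (to read nompdb_USER given DB=nompdb) and a sed substitution that flattens a directory path into a file name. A minimal, self-contained demonstration (the timestamp is a made-up example value):

```shell
# Indirect variable lookup: resolve ${DB}_USER -> value of $nompdb_USER
nompdb_USER=root
DB=nompdb
USER=$(eval echo \$${DB}_USER)
echo $USER                                   # root

# Path-to-filename mangling, as used for the tar archive names above
DATETIME=20240101-1200                       # example timestamp
DIR=etc/cron.daily
BACKUP_FILE=${DATETIME}_$(echo $DIR | sed 's/\//-/g').tgz
echo $BACKUP_FILE                            # 20240101-1200_etc-cron.daily.tgz
```

The eval trick works in plain /bin/sh; in bash you could use `${!var}` indirection instead, but eval keeps the script portable.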

 

Freemarker, slf4j and spring

I’ve just spent three hours trying to get FreeMarker to stop spitting out “DEBUG cache:81” messages in my Spring application.

FreeMarker recently hacked SLF4J support into 2.3, but I had a hard time finding out how to enable it, so I reckoned I’d share my experiences.

FreeMarker 2.3 looks for logging libraries in this order (by default), using the class loader of the FreeMarker classes: Log4j, Avalon, java.util.logging. The first one it finds in this list will be the one used for logging.

I found out that you can override this behavior in 2.3.18 by calling:

freemarker.log.Logger.
    selectLoggerLibrary(freemarker.log.Logger.LIBRARY_SLF4J);

However, this code needs to run before any FreeMarker classes are initialized.

After trying a few different tricks, such as having a load-on-startup Servlet’s init() configure the logger, I ended up with a fairly clean solution.

I extended Spring’s FreeMarkerConfigurer class like this:

import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.servlet.view.freemarker.FreeMarkerConfigurer;

import freemarker.template.TemplateException;

public class PluxFreeMarkerConfigurer extends FreeMarkerConfigurer {
    private Logger logger = LoggerFactory
            .getLogger(PluxFreeMarkerConfigurer.class);

    @Override
    public void afterPropertiesSet() throws IOException, TemplateException {
        fixFreemarkerLogging();
        super.afterPropertiesSet();
    }

    private void fixFreemarkerLogging() {
        try {
            freemarker.log.Logger
              .selectLoggerLibrary(freemarker.log.Logger.LIBRARY_SLF4J);
            logger.info("Switched broken Freemarker logging to slf4j");
        } catch (ClassNotFoundException e) {
            logger.warn("Failed to switch broken Freemarker logging to slf4j");
        }
    }
}

and changed my Spring-config to use my class to initialize Freemarker instead:

  <!-- FreeMarker engine that configures Freemarker for SLF4J-->
  <bean id="freemarkerConfig" class="com.selessia.plux.web.PluxFreeMarkerConfigurer"
 ...
 </bean>

Hope this helps someone.

Speedment – Snake-oil caching

It’s not every day that people walk into our office claiming to be 1000x faster than the competition. Especially not in the highly competitive landscape of data caching, where some big names in technology, such as Terracotta, Oracle and Gigaspaces, have been present for 5+ years.

This is what Speedment did.

Speedment is basically a non-coherent, non-shardable, read-only, write-through Java cache that can use off-heap storage, much like EhCache with BigMemory. However, you need to rewrite your application against Speedment’s own APIs to leverage the cache. I fail to see what makes it even remotely attractive compared to the competition. It uses database triggers to keep the caches up to date, which I would guess hurts database write performance.

According to Speedment’s web site (only available in Swedish) they are in the “Elastic Caching Platform”-business and they got funding from Första Entreprenörsfonden and from ALMI Invest. I feel truly sorry for these investors, as some technical due diligence could have saved them some money. It’s not that Speedment is all bad, it’s just not very good compared to the competition (including the FOSS competition).

Rather than an Elastic Caching Platform, I consider Speedment to be a Snake-oil caching platform.

There is a PDF in English here if you want to check out the sales pitch.

Unibet Privacy Proxy for Firefox and Internet Explorer

A little over a year ago, I came up with a neat idea for how to bypass any potential blocking of Unibet’s websites.

  1. As of today, we’re running this in production, and there is an updated version of the Firefox add-on.
  2. The big news for Internet Explorer users, and for the users of our Poker software: just run this script to activate the proxy settings and ensure functionality.

I hope you will find this useful!

Domain Event Driven Architecture

While working on my presentation for Qcon London 2010, I came to the following conclusions:

  1. SOA is all about dividing domain logic into separate systems and exposing it as services
  2. Some domain logic will, by its very nature, be spread out over many systems
  3. The result is domain pollution and bloat in the SOA systems

Domain EDA: By exposing relevant Domain Events on a shared event bus we can isolate cross cutting functions to separate systems

  • SOA+Domain EDA will reduce time-to-market for new functionality
  • SOA+Domain EDA will enable a layer of high-value services that have a visible impact on the bottom line of the business

Here is the full presentation: