My adventures in front-end-land

Two Backbone apps later… lessons learned.

I have to admit, I've been a secret admirer of great front-end developers. I thought my brain simply wasn't wired for HTML/JS/CSS.
My wife Marianne and I started a SaaS about a year ago, aimed at off-loading teachers who teach math to kids aged 6-12. You find the service at
The service makes it a breeze to hand out personalized assignments and to follow them up. All content is automatically generated, so the teacher doesn't have to produce the assignments either. The service is free for all kids to use; with a subscription you get the teacher/parental features.

When you run a bootstrapped (self-funded) business, you have to roll up your sleeves and just get things done, and often those are things you haven't done before. Marianne and I are Java developers by trade. I have a background in Unix sysadmin, but haven't done that professionally in 12 years. Instead I've been coding Java, with a break between 2007 and 2011 when I was Chief Architect at Unibet. That was a challenging and technically educational experience, but there is no time for coding when you're constantly running or governing key transformation projects while trying to be a line manager at the same time.

Some of the jobs that need to be performed in our startup include:

  • Business development / requirements management
  • Back-end developer (Java/Spring)
  • Front-end developer (HTML/CSS/LESS, jQuery/Underscore/Backbone)
  • Quality assurance (testing all the different browsers etc)
  • Systems administrator (Amazon Web Services: EC2/RDS/R53/SES/SQS/Cloudfront, Apache/Tomcat perf tuning, Security patching)
  • Database administrator (Tuning, backup strategy)
  • Configuration management (Maven/Jenkins/RPM-packaging and deploy automation)
  • Sales and marketing (Social media campaigns and online ads)
  • Finance (bookkeeping, invoicing)
  • Customer service (email support)

As both Marianne and I still have our daytime jobs to manage, there are a lot of late nights… but also a lot of fun!

The first versions of our math SaaS were a "traditional" server-side rendered application using Spring MVC and FreeMarker for templating.
After the initial launch, we spent some time looking at rewriting the most heavily used part – the student quiz – in JavaScript in order to improve the user experience. We didn't know much about jQuery, JavaScript or Ajax at that point.

The first rewrite did the job. It off-loaded the server and improved the user experience in terms of response times, as eleven server round trips were reduced to two (start the quiz and submit the result). However, it was unmaintainable for a number of reasons:

  1. No namespacing or modules (all functions in the global scope)
  2. Adding a new quiz type involved changing the core quiz logic, which meant that stuff broke
  3. The DOM manipulation was spread out all over the JavaScript code
  4. Client side templating of the quizzes was done with custom code
  5. We had two implementations, one for the desktop and another for mobile (jQuery mobile)
  6. The markup was a mess, without good structure and class names
  7. HTML5 canvas code was too low level to be productive
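To make the first point concrete, here is the difference in miniature. This is a generic plain-JavaScript sketch (the function and variable names are made up, not our actual quiz code) of the module pattern we should have used from the start:

```javascript
// Before: every function is a global, so any script can clobber it.
// function startQuiz() { ... }  function submitResult() { ... }

// After: one global namespace object, details hidden in a closure.
var Quiz = Quiz || {};

Quiz.engine = (function () {
  var answers = []; // private state, invisible to other scripts

  function recordAnswer(questionId, value) {
    answers.push({ questionId: questionId, value: value });
  }

  function answerCount() {
    return answers.length;
  }

  // Only the public API leaks out of the closure.
  return { recordAnswer: recordAnswer, answerCount: answerCount };
})();

Quiz.engine.recordAnswer(1, 42);
console.log(Quiz.engine.answerCount()); // 1
```

The internal `answers` array can't be reached (or broken) from outside the module, which is exactly what our first rewrite lacked.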

So after spending some additional months doing other stuff, we increasingly felt that the solution had grown out of hand.
We then spent some time researching the front-end open source space and found some interesting contributions:

Backbone.js

Backbone is a tiny MVP (model-view-presenter) framework for developing client-side applications in JavaScript.
For better and for worse, it is a tiny framework – only about 7,000 bytes to transfer minified and gzipped.
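In case you haven't seen Backbone before, the core idea is small enough to sketch in a few lines of plain JavaScript: a model holds attributes and fires events when they change, and views subscribe in order to re-render. This is a toy illustration of the pattern, not Backbone's actual implementation:

```javascript
// A toy observable model, mimicking the shape of Backbone.Model.
function Model(attributes) {
  this.attributes = attributes || {};
  this.handlers = {};
}

Model.prototype.get = function (key) {
  return this.attributes[key];
};

Model.prototype.set = function (key, value) {
  this.attributes[key] = value;
  // Notify anyone listening for changes.
  (this.handlers['change'] || []).forEach(function (fn) { fn(); });
};

Model.prototype.on = function (event, fn) {
  (this.handlers[event] = this.handlers[event] || []).push(fn);
};

// A "view" simply re-renders whenever the model changes.
var score = new Model({ points: 0 });
var rendered = '';
score.on('change', function () {
  rendered = 'Points: ' + score.get('points');
});

score.set('points', 10);
console.log(rendered); // "Points: 10"
```

The separation this buys you – DOM code reacting to model events instead of being sprinkled everywhere – is precisely what our first quiz rewrite was missing.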

Twitter Bootstrap

Bootstrap is an HTML/CSS framework with standardized markup and ready-made components that can be customized. It supports responsive design, so the same site can be used on mobile, tablet and desktop.


KineticJS

KineticJS is an HTML5 Canvas JavaScript framework that enables high-performance animations, transitions, node nesting, layering, filtering, caching and event handling for desktop and mobile applications, and much more.

We then started rewriting the quiz engine once again, this time on Backbone and Bootstrap, and it was quite a struggle for a pair of JavaScript rookies.
Backbone is elegant, but there is a lack of good tutorials beyond the trivial hello-world examples. I learned a lot from Christophe Coenraets' blog post, even though it is very simple (in retrospect).

We also struggled with things I take for granted in other frameworks, such as a best-practice project structure and naming conventions.

Nevertheless, we shipped a new quiz engine on Backbone/Bootstrap in less than a month, which was pretty OK considering our skill level at the time. Was it perfect? No, but it was a huge improvement, and now we can extend the quiz engine with new quizzes quickly and in a modular way.

The next evolution was a complete rewrite of the teacher/parent back office, which was pretty limited from a functionality point of view and, to be honest, a horrible user experience.

This was a considerably bigger effort, with twenty-odd views and perhaps twice as many use cases.
In this case we felt it was necessary to use a module system (and a module loader) in order to track dependencies between components.

Require.js does a pretty good job at this, but we felt the documentation was hard to follow. It comes with an optimizer that minifies and combines JavaScript. Not too many people seem to be integrating JavaScript into their Maven builds either, so it took a while to find a good Maven plugin that lets you run JavaScript from Maven.

When it comes to data binding, we started out with Backbone.Forms, but we quickly felt it was overly complicated and not always designed to be extended. I18n wasn't supported in a good way either.

To minimize duplication, we ended up rolling our own minimal data binding solution that consists of a Backbone model mixin, a view mixin and a few JS helpers.
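The mixin approach is really just object extension: at initialize time, the mixin's functions are copied onto the model or view instance. Since our Util.ErrorHandlingMixin isn't listed here, this sketch invents a couple of plausible methods to illustrate the mechanics (with Object.assign standing in for _.extend):

```javascript
// A hypothetical error-handling mixin: a plain object of functions
// that get copied onto any model instance that wants them.
var ErrorHandlingMixin = {
  clearErrors: function () { this.errors = []; },
  addError: function (attr, message) {
    this.errors.push({ attr: attr, error: message });
  },
  hasErrors: function () { return this.errors.length > 0; }
};

function ProfileModel() {
  // The "pixie dust": graft the mixin's behavior onto this instance.
  Object.assign(this, ErrorHandlingMixin);
  this.clearErrors();
}

var profile = new ProfileModel();
profile.addError('firstName', 'You need to provide at least two characters');
console.log(profile.hasErrors()); // true
```

Because the shared behavior lives in one object, every model and view gets identical error handling without inheriting from a common base class.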

Here's how the resulting code looks when using the mixins. The code implements a form with client-side and server-side validation; both types of errors are displayed in the form.

Backbone Model and View

var Profile = Backbone.Model.extend({
  urlRoot: "/api/profile",

  initialize: function () {
    var mixin = new Util.ErrorHandlingMixin(); // <--- the pixie dust
    _.extend(this, mixin);
  },

  validate: function (attributes, options) {
    var errors = [];
    if (this.get('firstName') == null || this.get('firstName').length < 2) {
      errors.push({ attr: 'firstName', error: 'You need to provide at least two characters' });
    }
    if (this.get('lastName') == null || this.get('lastName').length < 2) {
      errors.push({ attr: 'lastName', error: 'You need to provide at least two characters' });
    }
    if (this.get('rewardIcon') == null) {
      errors.push({ attr: 'rewardIcon', error: 'Mandatory field' });
    }
    return errors.length > 0 ? errors : null;
  },

  getAllRewardIcons: function () {
    // omitted for brevity
  },

  getAllLocales: function () {
    return ["en_GB", "sv_SE"];
  }
});

var ProfileView = Backbone.View.extend({
  initialize: function () {
    var mixin = new Util.ErrorHandlingViewMixin(); // <--- the pixie dust
    _.extend(this, mixin);
    this.model.on("invalid", this._showErrors, this);
  },

  events: {
    "click .save-form": "saveForm",
    "focus input": "validateForm"
  },

  template: _.template(tpl),

  render: function () {
    this.$el.html(this.template(_.extend(this.model.toJSON(),
      { rewardIcons: this.model.getAllRewardIcons(), locales: this.model.getAllLocales() })));
    return this;
  }
});

Markup with helper functions

<form class="form-horizontal">
  <legend><%= i18n.t('Personal information') %></legend>
  <div class="control-group">
    <label class="control-label"><%= i18n.t('First name') %></label>
    <div class="controls">
      <input name="firstName" type="text" value="<%- firstName %>">
      <div class="help-inline"></div>
    </div>
  </div>
  <div class="control-group">
    <label class="control-label"><%= i18n.t('Last name') %></label>
    <div class="controls">
      <input name="lastName" type="text" value="<%- lastName %>">
      <div class="help-inline"></div>
    </div>
  </div>
  <div class="control-group">
    <label class="control-label"><%= i18n.t('Description') %></label>
    <div class="controls">
      <input name="description" type="text" value="<%- description %>">
      <div class="help-inline"></div>
      <div class="help-block"><%= i18n.t('The description is shown to students that would like to become your mentee') %></div>
    </div>
  </div>
  <legend><%= i18n.t('Settings') %></legend>
  <div class="control-group">
    <label class="control-label"><%= i18n.t('Language') %></label>
    <div class="controls">
      <%= forms.select("locale", locale, "Locale", locales) %>
      <div class="help-inline"></div>
    </div>
  </div>
  <div class="control-group">
    <div class="controls">
      <%= forms.checkbox("subscribedToNewsletters", subscribedToNewsletters, i18n.t('Send me news about Nomp (about two times per month)')) %>
    </div>
  </div>
  <div class="control-group">
    <div class="controls">
      <%= forms.checkbox("subscribedToNotifications", subscribedToNotifications, i18n.t('Send me quest notification emails')) %>
    </div>
  </div>
  <div class="control-group">
    <label class="control-label"><%= i18n.t('Reward symbol') %></label>
    <div class="controls">
      <%= forms.select("rewardIcon", rewardIcon, "RewardIcon", rewardIcons) %>
      <span><img class="reward-icon" src="/static/img/rewardicon/<%= rewardIcon %>.png"></span>
      <div class="help-block"><%= i18n.t('The reward symbol is used in quests that you give out.') %></div>
    </div>
  </div>
  <div class="form-actions">
    <button class="btn btn-primary save-form"><%= i18n.t('Save') %></button> <span class="server-error error"></span>
  </div>
</form>
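The forms.checkbox and select helpers used in the template aren't listed here, but the idea is simply a function that builds a snippet of Bootstrap-friendly markup from a name, a current value and a label. A hypothetical minimal version of such a helper:

```javascript
// A hypothetical template helper in the spirit of forms.checkbox above:
// returns ready-made markup for a labelled checkbox, so the templates
// don't have to repeat the same input boilerplate over and over.
var forms = {
  checkbox: function (name, checked, label) {
    return '<label class="checkbox">' +
      '<input type="checkbox" name="' + name + '"' +
      (checked ? ' checked' : '') + '> ' + label +
      '</label>';
  }
};

console.log(forms.checkbox('subscribedToNewsletters', true, 'Send me news'));
```

Pushing this markup generation into helpers is what keeps the Underscore templates short and consistent across forms.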

We'd love your feedback on the binding issue. If people find it useful, we'll probably open-source the small data-binding framework once we've used it a bit more. We think it minimizes boilerplate code and avoids duplication without violating Backbone principles, such as using Model.validate() for validation.

Let us know what you think!

Backing up EC2 MySQL and configuration files to S3

I've spent a few hours setting up a proper backup strategy for my EC2 server, running

The service runs on a single reserved small instance at present. It’s using Amazon’s Linux distro with an Elastic Block Storage (EBS) root disk.

The first thing you should do after setting up an EC2 host is to make an EBS snapshot. An EBS snapshot is a full disk-device dump (like "dd" produces, if you're a Unix hacker). While EBS snapshots are a great feature and should be a cornerstone of any EC2 backup strategy, they are full volume dumps and hence take a lot of space.

To complement my EBS snapshots, which I run manually before and after bigger changes (yum update, package installs, etc.), I hacked together a little shell script in 1337 bytes (really) that backs up my MySQL databases in a supported manner (mysqldump) and also backs up a number of configuration files from the file system. The script makes use of a great tool called s3cmd, which is used to upload files to S3 (Amazon's Simple Storage Service).

How to set up the script (all steps as root):

  1. Install s3cmd
  2. Run s3cmd --configure
  3. Copy the generated .s3cfg file to /etc
  4. Download the S3 backup script to /etc/cron.daily/
  5. Edit the script to suit your needs.

I hope someone finds this useful!

The s3 console after a successful run

Here’s what the script looks like:

## Specify database schemas to backup and credentials
DATABASES="nompdb wp_blog"

## Syntax: <databasename as per above>_USER and _PW
## _USER is mandatory, _PW is optional

## Specify directories to backup (it's clever to use relative paths)
DIRECTORIES="root etc/cron.daily etc/httpd etc/tomcat6 tmp/jenkinsbackup"

## Initialize some variables
DATE=$(date +%Y%m%d)
DATETIME=$(date +%Y%m%d-%H%M)
S3_CMD="/usr/bin/s3cmd --config /etc/.s3cfg"

## Specify where the backups should be placed (replace with your own bucket)
S3_BUCKET_URL="s3://your-backup-bucket/${DATE}/"

## The script
cd /

## Backup the MySQL databases
for DB in $DATABASES
do
  BACKUP_FILE=${DATETIME}_${DB}.sql
  USER=$(eval echo \$${DB}_USER)
  PASSWORD=$(eval echo \$${DB}_PW)
  if [ -n "$PASSWORD" ]
  then
    /usr/bin/mysqldump -v --user $USER --password=$PASSWORD -h localhost -r $BACKUP_FILE $DB 2>&1
  else
    /usr/bin/mysqldump -v --user $USER -h localhost -r $BACKUP_FILE $DB 2>&1
  fi
  /bin/gzip $BACKUP_FILE 2>&1
  $S3_CMD put ${BACKUP_FILE}.gz $S3_BUCKET_URL 2>&1
  /bin/rm ${BACKUP_FILE}.gz
done

## Backup of config directories
for DIR in $DIRECTORIES
do
  BACKUP_FILE=${DATETIME}_$(echo $DIR | sed 's/\//-/g').tgz
  /bin/tar zcvf ${BACKUP_FILE} $DIR 2>&1
  $S3_CMD put ${BACKUP_FILE} $S3_BUCKET_URL 2>&1
  /bin/rm ${BACKUP_FILE}
done


Speedment – Snake-oil caching

It's not every day that people walk into our office claiming to be 1000x faster than the competition – especially not in the highly competitive data-caching landscape, where big technology names such as Terracotta, Oracle and GigaSpaces have been present for 5+ years.

This is what Speedment did.

Speedment is basically a non-coherent, non-shardable, read-only, write-through Java cache that can use off-heap storage, much like EhCache with BigMemory. However, you need to rewrite your application against Speedment's own APIs to leverage the cache. I fail to see what makes it even remotely attractive compared to the competition. It uses database triggers to keep the caches up to date, which I would guess hurts database write performance.

According to Speedment’s web site (only available in Swedish) they are in the “Elastic Caching Platform”-business and they got funding from Första Entreprenörsfonden and from ALMI Invest. I feel truly sorry for these investors, as some technical due diligence could have saved them some money. It’s not that Speedment is all bad, it’s just not very good compared to the competition (including the FOSS competition).

Rather than an Elastic Caching Platform, I consider Speedment to be a Snake-oil caching platform.

There is a PDF in English here if you want to check out the sales pitch.

Domain Event Driven Architecture

While working on my presentation for Qcon London 2010, I came to the following conclusions:

  1. SOA is all about dividing domain logic into separate systems and exposing it as services
  2. Some domain logic will, by its very nature, be spread out over many systems
  3. The result is domain pollution and bloat in the SOA systems

Domain EDA: By exposing relevant Domain Events on a shared event bus we can isolate cross cutting functions to separate systems

  • SOA+Domain EDA will reduce time-to-market for new functionality
  • SOA+Domain EDA will enable a layer of high-value services that have a visible impact on the bottom line of the business
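The shared event bus at the heart of Domain EDA can be sketched in a few lines. This is a toy in-process publish/subscribe bus with an invented example event – a real deployment would sit on messaging middleware – but the isolation property is the same: the cross-cutting function lives entirely in its own subscriber:

```javascript
// A toy domain event bus: systems publish domain events; cross-cutting
// functions (bonus engine, fraud detection, analytics...) subscribe.
var eventBus = {
  subscribers: {},
  subscribe: function (eventType, handler) {
    (this.subscribers[eventType] = this.subscribers[eventType] || []).push(handler);
  },
  publish: function (eventType, event) {
    (this.subscribers[eventType] || []).forEach(function (h) { h(event); });
  }
};

// A cross-cutting function isolated in its own subscriber, instead of
// being woven into every SOA service that touches deposits.
var alerts = [];
eventBus.subscribe('CustomerDeposited', function (e) {
  if (e.amount > 1000) alerts.push('large deposit by ' + e.customerId);
});

eventBus.publish('CustomerDeposited', { customerId: 'c42', amount: 5000 });
console.log(alerts.length); // 1
```

The publishing system knows nothing about who is listening, which is what keeps the domain logic of the SOA services free of cross-cutting pollution.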

Here is the full presentation:

Looking for mentor!

I live for learning new things, and hence I have been thinking for quite some time about getting a mentor.
  • My current career goal is CTO/CIO at a company where IT is considered a strategic investment.
  • I am interested in learning more about the strategic and day-to-day challenges of a C-level executive or a board member.
  • I currently have experience from the e-gaming, finance, banking, insurance, government and IT consulting industries. I have been self-employed twice, for a total of seven years. I consider myself an entrepreneurial spirit.
  • I have over 17 years of hands-on IT experience. I have worked with all aspects of IT (from operations, IT security and systems development to procurement, software architecture and enterprise architecture).
Do you know anyone interested in mentoring a business-minded technical expert such as myself?


Real Time Web as a Service

A giant hurdle to buying a system or solution as software is the need to buy hardware, then install, configure and manage it. You need to train people on the product's operational aspects and retain that skill within the company.

(Free) Open Source Software (FOSS) is great for spreading a product and getting adoption and support: you enable the developers and architects to play around with the stuff! The real challenge for FOSS (and other software) products is to go beyond the happy and content developer and also provide a painless path for adopters to get business value without a huge investment hurdle in terms of hardware, software, training or services.

I think the reason why something like Google Analytics is successful is that it is extremely painless to start using: you can focus on the business problem rather than the IT stuff. Obviously this is nothing new, and the examples I gave have been around for years. Software as a Service is great.

Then you have all the talk about the real-time web: putting information on users' desktops quickly, as it happens – "real time". This is what Twitter and Facebook are about, but the real-time web is also needed for e-commerce, gaming and a lot of other areas. There are even conferences about it, so it must be happening 😉

Lastly, the final piece of the puzzle is Service Level Agreements. In order to provide "real-time web" messaging as a service, there is a clear advantage to being close to the information consumers, both in terms of scaling out and in terms of guaranteed latency. I think it is going to be hard to commit to meaningful SLAs without being at the edge.

If you remove the need to invest in infrastructure, the need to train people on the operational aspects and then get excellent scalability and low latency guaranteed by contract, I’d buy it in a second. Who will provide me with the Real Time Web as a service?

Open Source strategy at Unibet

Just this week we made a tough call between a fairly proven commercial solution and a mix of new, fun, exciting and (fairly) unproven open source for messaging and last mile push technology. We went for the latter. Why?

To be honest, it came down to a gut-feeling decision. Would I prefer working for a company that used proven, stable commercial software – or would I prefer a company that thought it could get a competitive edge by using something new (and cool)?

I believe that in order to attract talent, we need to use cool open source technology.

On the way to work this morning I felt I should put my thoughts around our architectural strategy in writing. Here is what I came up with:

We will always favor free, open source software (FOSS) as components in our architecture.

Free as in “freedom of speech”
While we do not mind paying for consultancy services and quality support, it is important for us to avoid vendor lock-in, and any software we use should have a right-to-use license without any cost attached.

Open source software and open standards should always be our first choice.

Commercial, proprietary software needs to show exceptional business value (over free solutions) in order to be considered.

We will strive to contribute to the community by buying support from a company backing a FOSS solution or paying for product improvements that will also benefit the community.

These are the guiding principles for all software used at Unibet.

I’ll close with a quote:

Unibet has the most exciting, up-to-date architecture I have ever seen at any company.
— Jonas Bonér

Have you walked down the ORM road of death?

A friend of mine asked me a really good question tonight:

Hey Stefan,
It would be great if you could please give me a sense for how many development teams get hit by a database bottleneck in JEE / Java / 3-tier / ORM / JPA land? And, how they go about addressing it? What exactly causes their bottleneck?

I think most successful apps – scaling problems are hopefully a sign that people are actually using the stuff, right? – built with Hibernate/JPA hit database contention pretty early on. From what I've seen, this is usually caused by making excessive round trips over the wire or returning too-large result sets.

And then we spend time fixing all the obviously broken data access patterns: first by using HQL instead of the standard eager/lazy fetching, then by tuning the existing HQL, and then with direct SQL if needed.

I believe the next step after this is typically to try to scale vertically, both in the db and app tier. Throwing more hardware at the problem may get us quite a bit further at this point.

Then we might get to the point where the app gets fixed so that it actually makes sense to scale horizontally in the app tier. We will probably have to add a load balancer to the mix and use sticky sessions by now.

And then we will perhaps find out that we can't do that very well without a distributed second-level cache, and that all our direct SQL code writing to the DB (bypassing the second-level cache) won't allow us to use a second-level cache for reads either…

Here is where I think there are many options, and I'm not sure how people tend to go from here. We might see some people abandoning ORM, while others may try to get the second-level cache to work.

Are these the typical steps for scaling up a Java Hibernate/JPA app? What’s your experience?

Web pages are disappearing?

I believe the page (URL) is becoming more of a task-oriented landing area where the web site adapts its contents to the requesting user's needs. I believe the divorce between content and pages is inevitable. It will be interesting to see how this will affect the KPIs, the analytics tools we currently use and search engine optimization practices going forward.

I recently attended a breakfast round-table discussion hosted by Imad Mouline, the Chief Technology Officer of Gomez. For those who aren't familiar with Gomez, they specialize in web performance monitoring. It was an interesting discussion with participants from a few different industries; all were either CTOs or CTO direct reports.

Imad shared a few additional trends regarding web pages (aggregated from the Gomez data warehouse):

  • Page weight is increasing (kB/page)
  • The number of page objects is plateauing
  • The number of origin domains per page is increasing

We covered a few different topics, but the most interesting discussion (to me) was related to how web pages are constructed in modern web sites and what impact this has on measuring service-level key performance indicators (KPIs).

In order to sell effectively you need to create a web site that really stands out. One of the more effective ways of doing this is to use what we know about the user to contribute to this experience.

In general we tend to know a few things about each site visitor:

  • What browsing device the user is using (User-Agent HTTP header)
  • Where the user is (geo-IP lookup)
  • What the user's preferred language is (browser setting or region)
  • Whether the user is a returning customer (cookie)
  • The identity of the customer (cookie), and hence possibly age, gender, address, etc. 🙂
  • What time of day it is

So we basically know the how, who, when, where and what. In addition, we can use data from previous visits to our site – click-stream analysis, order history or segmentation from data warehouse analysis fed back into the content delivery system – to improve the customer experience.

For example, when a user visits our commerce site we can use all of the above to present the most relevant offers in a very targeted manner to that user. We can also cross-sell efficiently and offer bonuses if we think there is a risk of this being a lapsing customer. We can adapt to the user’s device and create a different experience depending on if the user is visiting in the afternoon or late night.
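Pulling those signals together, the selection logic on the server can be as simple as a rule function over what we know about the visitor. All names and rules below are invented for illustration:

```javascript
// A hypothetical offer selector: combine request-derived signals
// (returning-customer cookie, order history, device, time of day)
// into a single targeted choice for the landing page.
function selectOffer(visitor) {
  if (!visitor.returning) {
    return 'welcome-bonus';            // first visit: acquisition offer
  }
  if (visitor.daysSinceLastOrder > 90) {
    return 'win-back-discount';        // lapsing customer: retention offer
  }
  if (visitor.device === 'mobile' && visitor.hour >= 22) {
    return 'late-night-mobile-deal';   // adapt to device and time of day
  }
  return 'cross-sell-' + visitor.lastCategory; // default: cross-sell
}

console.log(selectOffer({ returning: false })); // "welcome-bonus"
```

The point is that the same URL now yields different components for different visitors, which is exactly what complicates per-page KPI measurement.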

If we do a good job with our one-to-one sales experience, the components and contents delivered on a particular page (URL) will, in other words, vary depending on who is requesting it, from where, on what device, and at what time. Depending on the application and the level of personalization, this will obviously impact both the non-functional and the functional KPIs: what is the conversion rate for the page? What is the response time for the page?


I am a long-time fan of Robert X. Cringely and I was looking forward to his comments on the Oracle/Sun debacle. Here's what he said on his blog – I couldn't agree more:

it ends with the heart of Sun moving a few miles up 101 to where it will certainly die.

But for the most part what Oracle will do with Sun is show a quick and dirty profit by slashing and burning at a prodigious rate, cutting the plenty of fat (and a fair amount of muscle) still at Sun. If you read the Oracle press release, the company is quite confident it is going to make a lot of money on this deal starting right away. How can they be so sure?
It's easy. First drop all the bits of Sun that don't make money. Then drop all the bits that don't fit in Oracle's strategic vision. Bring the back office entirely into Redwood Shores. Then cut what overhead is left to match the restructured business. Sell SPARC to some Asian OEM. Cut R&D by 80 percent, saving $2.4 billion per year. I'm guessing sell StorageTek, maybe even to IBM. And on and on. Gut Sun and milk what remains.


Regarding my previous post – I think the acquisition is the start of a long death march for Java open source. I do not expect Oracle to announce the death of anything, but it will nevertheless die unless fully embraced by Oracle. The sun will surely set on GlassFish and the rest of the projects that don't make any money for Sun and aren't of strategic interest to Oracle.