A giant hurdle when buying a system or solution as software is the need to buy hardware, then install, configure and manage it. You need to train people on the product's operational aspects and retain that skill within the company.

Free and Open Source Software (FOSS) is great for spreading a product and building adoption and support. You enable the developers and architects to play around with the stuff! The real challenge for FOSS (and other software) products is to go beyond the happy and content developer and also provide a painless path for adopters to deliver business value without a huge investment hurdle in terms of hardware, software, training or services.

I think the reason why something like Google Analytics or Salesforce.com is successful is that it is extremely painless to start using. You can focus on the business problem rather than the IT stuff. Obviously this is nothing new, and the examples I gave have been around for years. Software as a Service is great.

Then, you have all the talk about the real-time web and putting information quickly, as it happens – "real time" – on the users' desktops. This is what Twitter and Facebook are about, but the real-time web is also needed for e-commerce, gaming and a lot of other areas. There are even conferences about it, so it must be happening 😉

The final piece of the puzzle is Service Level Agreements. In order to provide "real-time web" messaging as a service, there is a clear advantage to being close to the information consumers, both in terms of scaling out and in terms of guaranteed latency. I think it is going to be hard to commit to meaningful SLAs without being at the edge.

If you remove the need to invest in infrastructure, the need to train people on the operational aspects and then get excellent scalability and low latency guaranteed by contract, I’d buy it in a second. Who will provide me with the Real Time Web as a service?

Open Source strategy at Unibet.com

Just this week we made a tough call between a fairly proven commercial solution and a mix of new, fun, exciting and (fairly) unproven open source for messaging and last mile push technology. We went for the latter. Why?

To be honest, it came down to a gut-feeling decision. Would I prefer working for a company that used proven, stable commercial software – or would I prefer a company that thought it could get a competitive edge by using something new (and cool)?

I believe that in order to attract talent, we need to use cool, open source, technology.

On the way to work this morning I felt I should put my thoughts around our architectural strategy in writing. Here is what I came up with:

We will always favor free, open source software (FOSS) as components in our architecture.

Free as in “freedom of speech”
While we do not mind paying for consultancy services and quality support, it is important for us to avoid vendor lock-in, and any software we use should have a right-to-use license without any cost attached.

Open source software and open standards should always be our first choice.

Commercial, proprietary software needs to show exceptional business value (over free solutions) in order to be considered.

We will strive to contribute to the community by buying support from a company backing a FOSS solution or paying for product improvements that will also benefit the community.

These are the guiding principles for all software used at Unibet.

I’ll close with a quote:

Unibet has the most exciting, up-to-date architecture I have ever seen at any company.
— Jonas Bonér

Have you walked down the ORM road of death?

A friend of mine asked me a really good question tonight:

Hey Stefan,
It would be great if you could please give me a sense for how many development teams get hit by a database bottleneck in JEE / Java / 3-tier / ORM / JPA land? And, how they go about addressing it? What exactly causes their bottleneck?

I think most successful apps – scaling problems are hopefully a sign that people are actually using the stuff, right? – built with Hibernate/JPA hit database contention pretty early on. From what I've seen, this is usually caused by excessive round-trips over the wire or returning overly large result sets.

And then we spend time fixing the obvious broken data access patterns: first by using explicit HQL instead of the default eager/lazy fetching, then by tuning the existing HQL, and finally by dropping to direct SQL if needed.
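As a sketch of what that first fix can look like (the entity and field names here are hypothetical, not from any real codebase), replacing per-row lazy loading with an explicit fetch join collapses the N+1 queries into a single round-trip:

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

// "Order" stands for any mapped @Entity with a lazy @OneToMany
// "items" collection; all names below are assumptions.
public class OrderQueries {

    // Anti-pattern: load the orders, then touch order.getItems() in a
    // loop. Hibernate issues one SELECT for the orders plus one SELECT
    // per order for its items (the classic N+1 round-trip problem).
    //
    // Fix: one HQL/JPQL query with "join fetch" pulls the orders and
    // their items back over the wire in a single round-trip.
    public static List<Order> ordersWithItems(EntityManager em, long customerId) {
        TypedQuery<Order> q = em.createQuery(
                "select distinct o from Order o "
              + "join fetch o.items "
              + "where o.customer.id = :customerId", Order.class);
        return q.setParameter("customerId", customerId).getResultList();
    }
}
```

The `distinct` keyword matters here: without it, the fetch join returns one row per order/item pair and Hibernate hands back duplicate Order instances.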

I believe the next step after this is typically to try to scale vertically, both in the db and app tier. Throwing more hardware at the problem may get us quite a bit further at this point.

Then we might get to the point where the app gets fixed so that it actually makes sense to scale horizontally in the app tier. We will probably have to add a load balancer to the mix and use sticky sessions by now.

And then we will perhaps find out that we cannot do that very well without a distributed 2nd level cache, and that all our direct SQL writing to the DB (which bypasses the 2nd level cache) won't allow us to use the 2nd level cache for reads either…
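For reference, enabling the second-level cache in Hibernate is roughly a matter of marking entities cacheable and wiring in a cache provider (the entity and its fields below are made-up examples, and the concurrency strategy is one of several options, not a recommendation):

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Mostly-read reference data is the usual first candidate for the
// second-level cache. Anything written to with direct SQL must be
// left out of the cache (or explicitly evicted), because Hibernate
// never sees those writes and would keep serving stale reads.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Country {
    @Id
    private Long id;
    private String isoCode;
    private String name;
}
```

On top of this, the cache itself has to be switched on in the Hibernate configuration (`hibernate.cache.use_second_level_cache` plus a region factory for whichever cache provider is chosen).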

Here is where I think there are many options, and I'm not sure how people tend to go from here. We might see some people abandoning ORM, while others try to get the 2nd level cache to work?

Are these the typical steps for scaling up a Java Hibernate/JPA app? What’s your experience?

Web pages are disappearing?

I believe the page (url) is becoming more of a task-oriented landing area where the web site will adapt the content to the requesting user's needs. I believe the divorce between content and pages is inevitable. It will be interesting to see how this will affect the KPIs, the analytics tools we currently use and search engine optimization practices going forward.

I recently attended a breakfast round-table discussion hosted by Imad Mouline. Imad is the Chief Technology Officer of Gomez. For those who aren't familiar with Gomez, they specialize in web performance monitoring. It was an interesting discussion with participants from a few different industries. Participants were either CTOs or CTO direct reports.

Imad shared a few additional trends regarding web pages (aggregated from the Gomez data warehouse):

  • Page weight is increasing (kB per page)
  • The number of page objects is plateauing
  • The number of origin domains per page is increasing

We covered a few different topics, but the most interesting discussion (to me) was related to how web pages are being constructed in modern web sites and what impact this has on measuring service level key performance indicators (KPIs).

In order to sell effectively you need to create a web site that really stands out. One of the more effective ways of doing this is to use what we know about the user to personalize the experience.

In general we tend to know a few things about each site visitor:

  • What browsing device the user is using (User-Agent HTTP header)
  • Where the user is (geo-IP lookup)
  • What the user's preferred language is (browser setting or region)
  • Whether the user is a returning customer (cookie)
  • The identity of the customer (cookie), and hence possibly age, gender, address etc 🙂
  • What time of day it is

So we basically know the how, who, when, where and what. In addition, we can use data from previous visits to our site, such as click stream analysis, order history or segmentation from data warehouse analysis fed back into the content delivery system, to improve the customer experience.
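As a tiny illustration of the preferred-language signal (a minimal sketch, not production-grade header parsing; the class name is made up), the browser's Accept-Language header can be ordered by its quality values:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class RequestProfile {

    // Parse an Accept-Language header such as "sv-SE,sv;q=0.9,en;q=0.8"
    // and return the language tags ordered by q-value, highest first.
    // A missing q-value defaults to 1.0, per the HTTP specification.
    public static List<String> preferredLanguages(String header) {
        List<Map.Entry<String, Double>> weighted = new ArrayList<>();
        for (String part : header.split(",")) {
            String[] bits = part.trim().split(";");
            double q = 1.0;
            for (int i = 1; i < bits.length; i++) {
                String b = bits[i].trim();
                if (b.startsWith("q=")) {
                    q = Double.parseDouble(b.substring(2));
                }
            }
            weighted.add(new AbstractMap.SimpleEntry<>(bits[0].trim(), q));
        }
        // Stable sort by descending weight keeps the header's own
        // ordering for tags with equal q-values.
        weighted.sort((a, b) -> Double.compare(b.getValue(), a.getValue()));
        List<String> tags = new ArrayList<>();
        for (Map.Entry<String, Double> e : weighted) {
            tags.add(e.getKey());
        }
        return tags;
    }

    public static void main(String[] args) {
        // Prints [sv-SE, sv, en]
        System.out.println(preferredLanguages("sv-SE,sv;q=0.9,en;q=0.8"));
    }
}
```

In a servlet the raw header comes from `request.getHeader("Accept-Language")`; the first tag in the returned list is the language to serve if we support it.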

For example, when a user visits our commerce site we can use all of the above to present the most relevant offers in a very targeted manner to that user. We can also cross-sell efficiently and offer bonuses if we think there is a risk of this being a lapsing customer. We can adapt to the user’s device and create a different experience depending on if the user is visiting in the afternoon or late night.

If we do a good job with our one-to-one sales experience, the components and content delivered on a particular page (url) will in other words vary depending on who is requesting it, from where, with what device, and at what time. Depending on the application and the level of personalization, this will obviously impact both the non-functional and functional KPIs: What is the conversion rate for the page? What is the response time for the page?


I am a long time fan of Robert X. Cringely and I was looking forward to his comments on the Oracle/Sun debacle. Here’s what he said in his blog – I couldn’t agree more:

it ends with the heart of Sun moving a few miles up 101 to where it will certainly die.

But for the most part what Oracle will do with Sun is show a quick and dirty profit by slashing and burning at a prodigious rate, cutting the plenty of fat (and a fair amount of muscle) still at Sun. If you read the Oracle press release, the company is quite confident it is going to make a lot of money on this deal starting right away. How can they be so sure?
It's easy. First drop all the bits of Sun that don't make money. Then drop all the bits that don't fit in Oracle's strategic vision. Bring the back office entirely into Redwood Shores. Then cut what overhead is left to match the restructured business. Sell SPARC to some Asian OEM. Cut R&D by 80 percent, saving $2.4 billion per year. I'm guessing sell StorageTek, maybe even to IBM. And on and on. Gut Sun and milk what remains.

Read more at http://www.cringely.com/2009/04/sunset/

Regarding my previous post – I think that the acquisition is the start of a long death process for open source Java. I do not expect Oracle to announce the death of anything, but it will nevertheless die unless fully embraced by Oracle. The sun will surely set on Glassfish and the rest of the projects that neither make money for Sun nor are of strategic interest to Oracle.

Oracle kills Open Source Java with a really big rock?

Being known as the guy that called Oracle evil in a blog post, I feel I gotta comment on today’s announcement that Oracle is buying Sun Microsystems for 7.4 billion US dollars. As you can imagine, I’m not very optimistic.

What does the deal really mean to the Open Source Java community? Isn’t this just business as usual? And wouldn’t we be worse off if Big Blue would have bought Sun a couple of weeks back?

As you might have guessed, I would have preferred IBM to buy Sun for many reasons. Perhaps the main thing is that I feel IBM has been embracing open source, whereas Oracle hasn’t. It makes all the difference. Let’s hope Oracle sees the light and doesn’t screw up everything Java!

The Sun assets at stake:
Development Tools: NetBeans
Middleware: OpenSSO, Glassfish, MySQL, Java Hotspot JVM, Java Real Time System
Consumer Technology: OpenOffice, JavaFX / JavaFX Mobile

I’ll try to describe what I think is a likely outcome of the assets above by comparing them to Oracle’s current product line and let’s see how bad this actually can get…

Sun NetBeans vs Oracle JDeveloper

This is easy – no one uses JDeveloper, and it would surprise me if Oracle didn’t bite the bullet and ditch JDeveloper for NetBeans which has become a really good (the best?) IDE recently.

Sun OpenSSO vs Oracle Access Manager

There will be no point for Oracle to invest any money in OpenSSO when they already have a good offering in their Fusion middleware suite. OpenSSO is toast.

Sun Glassfish vs Oracle Weblogic

Oracle has a stronger app server in Weblogic than Sun does in Glassfish. I think Glassfish will be put to the axe. Quickly. Oracle is not known for giving away software; it only open sources software that needs life support. Recent examples include TopLink and ADF Faces.

Sun MySQL vs Oracle RDBMS

I think this is _the_ most obvious: MySQL is TheirSQL now and also R.I.P.

Sun Java Hotspot vs Oracle JRockit

Being the cynical person I am, I think Oracle can kill two birds with one stone here. One might think that Oracle would merge the two VM efforts into one, but the result might not be what you think. Let's just assume that Oracle takes the good stuff from JRockit and puts it into the Hotspot JVM reference implementation. Is this a likely scenario? Hell no. Oracle is in the software business to make money, and if you want to run a production-grade Java server VM, then you will have to get it from Oracle for a fee. By doing this they also effectively kill Terracotta, the only viable contender to Oracle Coherence. See, Terracotta will not run on JRockit… The consumer JVM will be named Sun JVM. Oracle will of course keep the Java Real Time VM, as it is a profitable business.

OpenOffice, JavaFX / JavaFX Mobile

Oracle's track record in building consumer applications is, well, not great. Anyone who has ever tried to install an Oracle product knows what I'm talking about. So I don't think OpenOffice will survive either. On the other hand, Larry may want to keep pushing it just to be a thorn in Microsoft's side… As for JavaFX, it doesn't really stand a chance against Adobe – it's too little, too late. Oracle knows this and will kill it. Quietly.

So, is this good for anyone at all? Yes. Oracle and Microsoft. Everyone else loses. IBM is in a really awkward situation and JavaOne this year may be the ultimate funeral service for (free) open source Java.

Keep in mind that when Oracle says open they generally mean open standards, whereas when IBM and Sun say open they generally mean free open source software.

I’ll close with a few Larry Ellison quotes from the conference call:

“Java is the foundation of the Oracle Fusion middleware and it’s the second most important software asset we’ve ever acquired.”

“We acquired BEA because they had the leading Java virtual machine”