Future of deployment: Part 1 – Monuments vs Cheap housing

21 Dec, 2009

I’m going to start a series on the future of deployment: how and what will we deploy in, say, five years? Of course this is just my opinion, so please add your own ideas in the comments below.
To start this series off, I’m going to talk about the current state of things, or at least what I see at a lot of enterprise customers. Most of the enterprises I’ve been at have physical servers which are shared by numerous applications from different development teams. Some of these servers are old and have been maintained by operations for years (4+ years ;)). That means the server has changed: lots of deltas, i.e. patches, deployments, etc., have been applied, and as my colleague Vincent has stated, applying deltas has its cons 😉 Of course I’m talking about servers and not applications, so the same rules do not apply. Or do they?

Deltas on servers are bad, period.
I think the same rules do apply. Applying deltas might be faster, but in the end it becomes increasingly harder to map out the path you have taken from four years ago up till now, and this is oh-so obvious on the servers themselves. Try to rebuild a four-year-old server on which at least five deployments have been executed every week, a patch or two to the OS or middleware has been applied every month, and some change to the filesystem has been made every six months. It is just plain hard.
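The difference becomes concrete when you script the rebuild instead of the deltas. Here is a minimal sketch (all paths, version numbers and config keys are made up for illustration, not taken from any real environment): rather than patching a long-lived server, a single versioned script recreates the desired end state from scratch, so the script itself documents where you are.

```shell
#!/bin/sh
# Hypothetical rebuild script: the server's end state lives in version
# control, not in four years of accumulated deltas. All names below are
# illustrative.
set -e

ROOT="${1:-/tmp/app-server}"   # build root; a real script would target the machine itself

# Start from a clean slate every run, so the result is reproducible and
# the script is the full history.
rm -rf "$ROOT"
mkdir -p "$ROOT/opt/app" "$ROOT/var/log/app" "$ROOT/etc"

# Declare the desired state explicitly (JDK version, log path, ...)
# instead of letting it drift patch by patch.
cat > "$ROOT/etc/app.conf" <<EOF
jdk_version=1.6
log_dir=/var/log/app
EOF

echo "rebuilt $ROOT from scratch"
```

Running it twice yields the same server; compare that with replaying four years of patches in exactly the right order.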
And here is the prime example
A couple of years ago I witnessed a project that was trying to move an entire server and application environment from one location to another, and in the meantime get rid of some out-of-date standards that were lingering on those servers. They had automated deployment scripts for all their applications, so the only thing they needed was a clean environment at the new location where they could install the latest and greatest version of their applications. They tried for six months to get it working, but failed because they could not properly reproduce the servers at the remote location: the applications depended on too much old, out-of-date stuff on those servers! So in the end they gave up and moved by restoring a server backup at the remote site. The lesson this company learned was to spread their applications across more servers. This allowed them to keep their servers and applications more up to date and to get rid of out-of-date standards more visibly.
Introducing the new is easy, getting rid of the old, just let it be…
The company created new servers which were to be used only by new applications, so they could install them almost any way they wanted. Those new application deployments could then use the new features of those servers, and almost everything was good. Whenever an old application wanted to make use of functionality only available on the new servers, its team had to adjust their deployment and sometimes their code. It was accepted that when using new functionality, you move to a new server with an updated JDK, new log file paths, more memory, a new version of the application server or portal, and so on. Applications had a natural upgrade path: old applications run on old servers, but those servers do not require much maintenance beyond the odd patch and cleaning up log files. New applications run on new servers with better middleware, tools, etc., making maintenance life somewhat easier on a different level.
Different levels of maintenance – Monuments vs. Cheap housing
How can more servers result in lower maintenance? Isn’t that just weird? Yes it is! But the difference is this: if I have one server for all my applications, it becomes hard to make changes to the server and those applications. It is just like 40 people (applications) living in a monumental building (server): for every change you have to figure out the impact on the building itself and on everyone living there. In my own experience, every time I wanted to make a change to such a server I had to go through a committee to get approval! The committee consisted not only of hardware/OS/middleware people but also of all the application people. All 40 of them ;( You might feel my frustration as I requested my third change for that particular server that year. When we moved to smaller servers (cheap housing) with fewer applications (small families), it got a whole lot easier to make changes 😉 Ok, ok, so the amount of maintenance wasn’t the real problem; getting consent from 50 people for a change and then finding out whether it worked in the monument was the problem. Changing something in, or even building, a brand new cheap house felt like a breeze!
So what about the future then?
In my next post I’ll explore what is, in my opinion, the next big thing after the “Monument” and “Cheap housing”. Of course it has something to do with cloud/virtualization technologies: it will be all about moving appliances, and it is something that Deployit will provide support for.

Robert van Loghem
I'm always interested in the latest and greatest when it comes to communication, infrastructure, user experience, and coming up with some crazy creative solution which might seem a weird combination ;) I use and spread the word about multimedia (podcasts, vodcasts, movies, comics) to effectively communicate concepts, ideas, documentation, past experiences and so on. Furthermore, I am heavy into infrastructure, but then the middleware part: HTTP servers, application servers, messaging, virtualization, etc. I get really enthusiastic if the infrastructure is clustered, highly available and critical to doing business! I also like to do development, and thus "I eat my own dogfood".

