Customize off the shelf, be warned

19 Aug, 2011

A while ago I realized that the C in COTS stands for Customize, so in reality it is Customize Off The Shelf (and not Commercial Off The Shelf). The premise of COTS products is that they reduce system development costs and long-term operational maintenance costs. That sounds like music to management and procurement departments, but reality can be different. Realizing that the C stands for Customize highlights one of the pitfalls most people are aware of: the amount of customization needed to make a COTS product fit in an organization can be huge. But there are more pitfalls, and in this blog I’ll highlight a few.

The first pitfall is that many procurement departments and managers still have the mindset that everything is clear at the start of a project, when in fact there is only a rough idea of what problem must be solved and relative confidence that a positive business case can be made. To get a better picture of the costs, service integrators are contacted so that the business case can be made. A preference for a COTS-based solution is expressed, and from there on the discussions focus on the features of the COTS product, license costs, and hourly rates for professional services. Little attention is given to analyzing the problem more deeply, getting the core requirements clear, and checking whether a COTS product is the appropriate solution at all. For example, a real-time ETL product may support Change Data Capture, but using it would create a really tight coupling to the internal data model of the source system and would require the ETL product to understand how to interpret that data model. Although technically possible, it is definitely not preferred from an architectural perspective.
Selecting a COTS product is a decision that cannot be taken lightly: it frames the architecture and design space and comes with a set of consequences. This decision is hard to revert, so to make it you should understand the problem you’re trying to solve, the constraints that apply, the pros and cons of the product, etc. A proof of concept provides valuable information and forces you to really think about the problem you’re solving. Domain and technical experts should decide which requirements (both functional and non-functional) to include in the proof of concept. Don’t rely on the experts of the service integrator or COTS vendor alone; ensure you have some neutral experts involved. Should you pay for the execution of such a proof of concept? Definitely not for the licenses involved; for the hours spent you could. The costs of a proof of concept should be a fraction of the total costs, and you may save yourself a ton of money by doing a serious proof of concept and learning from it. Remember, you get what you pay for.
Once the decision for a specific COTS product is made, the next pitfalls come into play. Although the product is envisioned to work off the shelf, there is always that tiny bit of configuration to be done. More often than not this turns out to be a significant effort. Now the team starts to really dig into the requirements and understand them. Some can be realized easily with the COTS product, some cannot, and that’s where the trouble starts. If you feel like you’re pushing a square peg through a round hole, you know you’ve got the wrong tool for the problem at hand. Deciding not to use the COTS product (for everything) is often a politically sensitive subject. Therefore, be open about this and always keep the option open to use other tools if the COTS product becomes an impediment. The goal is not to use COTS product XYZ; the goal is to solve the business problem at hand.
The next pitfall concerns the use of development practices. If significant configuration effort and/or custom coding is required, it becomes a development project instead of a straightforward COTS deployment. If it is a development project, it should be treated as one, and established practices like version and release management, continuous integration, automated testing, automated deployment, etc. must be applied. These practices build quality into the process and enable teams to identify and fix issues early instead of after go-live. Are these practices ignored? Yes, at least I’ve seen it happen. The question is “why?”.
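As a minimal illustration of what automated testing can mean even for a configuration-heavy COTS project, the sketch below checks an exported configuration for required settings. The setting names are invented for the example, not taken from any real product.

```python
def missing_settings(config, required):
    """Return the required keys that are absent or empty in a parsed
    COTS configuration (represented here simply as a dict, e.g. loaded
    from a configuration export)."""
    return sorted(k for k in required if not config.get(k))


# Illustrative setting names; adapt them to your product's configuration.
REQUIRED = ["db.url", "db.user", "etl.schedule"]


def check_exported_config(config):
    """A tiny automated check you could run in a CI job after every export."""
    missing = missing_settings(config, REQUIRED)
    assert not missing, f"configuration incomplete, missing: {missing}"
```

Running such a check as part of every build makes an incomplete export fail fast instead of surfacing after go-live.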
I suspect that one cause is the mix of skills in the team, or rather, the lack thereof. When running a project based on a COTS product, the natural tendency is to search for team members that are trained in that product. Although they know the COTS product inside out, they may not have a software engineering background and/or are not hooked into the software engineering community. They are simply not aware of these practices. And even if they are aware that these practices exist, they may not know how to execute them. For these reasons it’s important to have a mixed skill set in the team. Sure, you need COTS product experts, but you also need all-round software engineers, a DBA, operating system experts, etc. They might not be full-time team members during the whole project, but they must be available at short notice and stay informed about the current state of the project.
Another reason why these practices are not applied may be limitations of the COTS product. If the COTS product doesn’t support version management itself and integration with a well-known version management system like Subversion is not possible, then you’ll have to work out an alternative. This may involve some manual exporting, but the fact that it’s not supported out of the box does not imply that these practices can be skipped without negative consequences.
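Such a workaround can be as small as a scheduled script that runs the product’s export facility and commits the result to Subversion. The sketch below is an assumption-laden example: `cotsctl export` stands in for whatever export command your product actually offers.

```python
import subprocess


def versioning_steps(export_cmd, workdir, message):
    """Build the commands that dump the COTS configuration into an svn
    working copy and commit it. `export_cmd` is a placeholder for the
    product's real (vendor-specific) export mechanism."""
    return [
        export_cmd,                               # vendor-specific config dump
        ["svn", "add", "--force", workdir],       # register any new files
        ["svn", "commit", "-m", message, workdir],
    ]


def run_versioning(export_cmd, workdir, message, dry_run=True):
    """Execute the steps; with dry_run=True only print what would run."""
    for cmd in versioning_steps(export_cmd, workdir, message):
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```

Scheduled nightly, this at least gives you a history of configuration changes, even though it is no substitute for proper version management support in the product itself.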
A third reason to have a mixed team is to prevent the golden-hammer syndrome. If the team consists only of COTS product experts, they’ll try to solve every problem with the COTS product, even when a bit of scripting or the use of standard command line tools would do the trick much faster and with less effort. Generating test data is an example: typically a bit of scripting is enough. However, I’ve seen situations in which complete ETL workflows were developed in the COTS product just to generate test data. That took significant effort, and the speed at which the data was generated was low (which became a problem when the database had to be populated with larger amounts of data). An all-round software engineer can generate that test data in a much more efficient way.
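To make the contrast concrete, here is roughly what “a bit of scripting” means: a short Python sketch that writes random customer records to a CSV file ready for bulk loading. The column names and value ranges are invented for the example; adapt them to your target schema.

```python
import csv
import random
import string
from datetime import date, timedelta


def generate_customers(path, rows, seed=42):
    """Write `rows` random customer records to a CSV file.

    Seeded so the same data can be regenerated for repeatable tests.
    Columns (id, name, signup_date, balance) are illustrative only.
    """
    rng = random.Random(seed)
    start = date(2010, 1, 1)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "signup_date", "balance"])
        for i in range(1, rows + 1):
            name = "".join(rng.choices(string.ascii_lowercase, k=8))
            signup = start + timedelta(days=rng.randrange(365))
            writer.writerow([i, name, signup.isoformat(),
                             round(rng.uniform(0, 1000), 2)])
```

Twenty lines, runs in seconds for millions of rows, and the output can be bulk-loaded with whatever loader the database ships with.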
These are some pitfalls I came across, and suggestions on how to deal with them. They can be prevented by having the right mix of skills involved and by focusing on the problem to be solved instead of on COTS product features. What pitfalls have you seen? How should we prevent them from happening?

