A few years ago I was asked by one of our customers to help them make better use of their integration layer. Ever since then my team and I have been working on a framework in support of that. This is the fourth in a series of blogs on the development of our framework, and it discusses the features the framework provides. The post announced last time, about building blocks, is postponed for now.
So far I’ve discussed the goals &amp; challenges surrounding the development activities, but now I’d like to focus more on the framework itself, and on what it brings to those who use it.
As soon as a new party (be it a service consumer or a service provider) connects to our framework, it benefits directly from the wealth of functionality we deliver out of the box. These ‘generic features’ are exactly what one would expect from a (logical) ESB, and are partly based on the Expanded Enterprise Service Bus Pattern.
As our project was Scrum-driven, features were developed only when they were needed. Sometimes, during the design &amp; build phase, we discovered a better way of doing things, and sometimes the problem a feature was supposed to address was solved in a completely different way, outside of our scope. In the end, though, we managed to implement about 20 features, which can roughly be divided into five types: routing, robustness, security, transformation and data storage.
One of our main objectives is to get messages from A to B without A having to know where B currently resides. To do this, we make extensive use of the WS-Addressing standard. One of the components in our framework, the routing service, uses the information in the message headers to decide what the next hop will be (a hop in this case being another framework component). Most of the time a message is delivered to the back end as soon as it enters the integration layer, something we call simple routing.
However, as soon as some special activity needs to be performed (data model transformation, for example), the message is detoured to one of the framework components not connected to the outside world. We classify those as intermediate services, and they are agnostic by nature – which means they have no clue about the context in which they are called. The information needed for the message to continue on its path is part of the addressing headers as well, making this advanced routing.
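To make the difference between simple and advanced routing concrete, here is a minimal sketch of the routing decision. The function, header keys and endpoint URLs are all hypothetical stand-ins (our actual routing service works on WS-Addressing SOAP headers, not Python dicts): if the headers still list a pending intermediate service, the message is detoured there; otherwise it goes straight to the final destination.

```python
def next_hop(headers: dict) -> str:
    """Pick the next hop for a message based on its addressing headers."""
    intermediates = headers.get("IntermediateServices", [])
    if intermediates:
        # Advanced routing: detour to the first pending intermediate
        # service (e.g. a transformation component).
        return intermediates.pop(0)
    # Simple routing: deliver straight to the final destination,
    # analogous to the WS-Addressing "To" header.
    return headers["To"]

msg = {"To": "http://provider.example/order",
       "IntermediateServices": ["http://esb.example/transform"]}
print(next_hop(msg))  # the transformation service comes first
print(next_hop(msg))  # then the provider endpoint itself
```

The key point the sketch illustrates is that the routing service itself stays stateless: everything it needs travels with the message in its headers.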
A special kind of advanced routing is distribution, which makes it possible to send one message to several subscribers at the same time, using WS-Notification in its most basic form. Finally, prioritized routing is a feature that makes sure a message gets ahead of the rest, so to speak – very useful when dealing with a customer waiting for service at a counter while a bulk load is also being processed.
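Prioritized routing boils down to dispatching from a priority queue instead of a plain FIFO. The sketch below (class and method names are illustrative, not part of our framework) shows the counter-customer scenario: a high-priority message submitted after a bulk load still comes out first.

```python
import heapq

class PriorityDispatcher:
    """Dispatch messages by priority; lower number = more urgent."""
    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker keeps FIFO order within a priority

    def submit(self, message: str, priority: int = 10) -> None:
        heapq.heappush(self._queue, (priority, self._counter, message))
        self._counter += 1

    def next(self) -> str:
        return heapq.heappop(self._queue)[2]

d = PriorityDispatcher()
d.submit("bulk-record-1")              # default (low) priority
d.submit("bulk-record-2")
d.submit("counter-request", priority=1)  # customer waiting at the counter
print(d.next())  # counter-request jumps the queue
print(d.next())  # then the bulk records, in submission order
```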
Of course it’s of the utmost importance that nothing goes wrong when delivering the message, or, when it does, that we can at least deal with the situation (exception management). The first thing that happens when we receive a message is a check to see whether it complies with the industry &amp; design standards we enforce (technical validation). Sometimes the consumer/publisher doesn’t want to wait for the (functional) answer but still wants to be informed whether its message was technically valid, in which case we send a response stating just that (technical acceptance).
Two of our features deal with peak load: throttling makes sure the integration layer only takes in what it can handle, while buffering guarantees we don’t overwhelm the applications we connect to. Similar to the latter is postponed delivery, which is used when we know beforehand that the back end is not available.
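Throttling is commonly implemented as a token bucket, and that is a fair mental model for what the feature does (the class below is a simplified sketch, not our actual implementation): messages are admitted as long as tokens are available, and rejected – or, with buffering, queued – once the configured rate is exceeded.

```python
import time

class Throttle:
    """Token bucket: admit at most `rate` messages per second,
    with short bursts of up to `burst` messages."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_accept(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should reject (throttle) or queue (buffer)

t = Throttle(rate=1, burst=2)
print(t.try_accept(), t.try_accept(), t.try_accept())
```

With a burst of 2 and a rate of 1/s, the first two back-to-back calls are admitted and the third is refused until a token has been refilled.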
By far the most important feature of this type, however, is guaranteed delivery. We played around with a bidirectional variant (using WS-RM) but finally settled on a unidirectional implementation, meaning that we first store the message before sending it, and resend it if we receive an HTTP error code.
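The unidirectional store-and-resend loop can be sketched in a few lines. `send` and `store` here are hypothetical stand-ins for the real transport and persistence layers, and the retry cap is an assumption for illustration – the essential order of operations is: persist first, then attempt delivery, then resend on an HTTP error status.

```python
def deliver(message: str, send, store, max_attempts: int = 5) -> bool:
    store(message)                 # persist first, so nothing is ever lost
    for _ in range(max_attempts):
        status = send(message)     # transport returns an HTTP status code
        if 200 <= status < 300:
            return True            # delivered; stop resending
    return False                   # exhausted all attempts; escalate

# Demo with a flaky transport that fails twice, then succeeds.
attempts = []
def flaky_send(msg):
    attempts.append(msg)
    return 503 if len(attempts) < 3 else 200

stored = []
print(deliver("order-42", flaky_send, stored.append))  # True, on the 3rd attempt
```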
Last (and actually least, as it’s hardly used) it’s also possible to have syntactic validation of outgoing messages. But as we like to follow Postel’s law (also known as the robustness principle), we feel it’s the consumer’s responsibility to make sure the message was valid to begin with (you can imagine this took some selling on our part). The only exception is when the payload is altered by the framework.
Every application that wants to connect to the integration layer needs to make itself known using the WS-Security UsernameToken (authentication). For most of the services we expose that’s enough; in a few rare cases such an application also has to have explicit permission (authorization).
Integrity is not used internally; only when we receive requests from certain external customers do we demand that certain parts of the message headers and payload are signed.
Given that not all applications connected to the framework ‘speak the same language’ (as mentioned in the previous blog), there’s an evident need for data model transformation – one of our most used features. A lot less popular is the split feature, which makes it possible to divide a big message into smaller parts.
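At its core, the split feature is simple chunking: a batch message carrying many records becomes several smaller messages of at most a configured size. A minimal sketch (the function name and record representation are assumptions, not our actual interface):

```python
def split(records: list, size: int) -> list:
    """Divide one big batch into sub-batches of at most `size` records."""
    return [records[i:i + size] for i in range(0, len(records), size)]

print(split(["r1", "r2", "r3", "r4", "r5"], 2))
# → [['r1', 'r2'], ['r3', 'r4'], ['r5']]
```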
The last few features play a more supportive role to the ones already mentioned. We provide logging to be used during testing &amp; debugging sessions, and persistence when there’s a need to store the complete message. The latter is frequently used in combination with resending (which is necessary for the guaranteed delivery feature described above), but also in case of auditing requirements.
Most of these features have been around since the first version of our framework, and have proven their generic qualities over time. In a few cases we had to make some alterations, and even now there are one or two features we might implement differently in the future. There’s also a list of additional features, but it’s rather short – which I take as a sign that what we have here is pretty complete.
That’s number four; next time I’ll talk a bit about the more specific deliverables we provide our project teams.