Technology Doesn’t Kill Projects, People Do

Having recently read a Forrester whitepaper regarding integration trends, I am inclined to agree with their findings. Most of the larger companies I work with have multiple “middleware” tools from multiple vendors, all solving problems in their own way.

Generally, integration technology is the plumbing of the business: it isn't something that gets changed often. It is put in to solve a specific need, and even the most strategic platforms often end up having to talk to other vendors' integration technology.

One of the most common things we have to do with our Interlok Adapters is bridge between other integration vendors' tools – I often joke that we are the UN of integration vendors, bringing them all together to deliver some form of cohesion. The trouble is that integration technology has evolved at a slower pace than the application landscape.

If you look back at the late '90s and early 2000s, it was all about EAI patterns. This was a great place to be: lots of connections to things, but all integration processes controlled by a routing engine in the middle. Pretty much every EAI vendor of the time implemented this pattern with varying degrees of success. The trouble was, all of the tools were proprietary, and woe betide you if you tried to get them to speak to one another.

Enter SOA and the ESB, from about 2001 onwards. This actually had a different pattern: a bus pattern, with everything connecting to a messaging layer and no central router (if you look at the purest ESB offerings of the time). The major stumbling block here was that vendors who had delivered EAI tools took out the lipstick and dressed up the pig. This led to widespread confusion in the marketplace, with some vendors pushing the 'standard' BPEL engine (hmm, a central routing engine – now where have I heard that before?).

However, this wave did promise to use standards, and it did indeed introduce a great step forward. Although BPEL never actually gave anyone vendor independence, it did provide support for more interesting open standards and led to a rise in SOAP-based web services. At least now I can get integration platforms to "talk", and WSDL does at least allow a contract to be defined.

The big problem here is: which ESB controls the overall process? How do you stop the two camps in your business from engaging in an expensive turf war? Things have moved on, with even more options for integration and the advent of cloud platforms. The biggest change needed is the ability to federate execution of the process while keeping central control and configuration management.

It shouldn’t matter whether you are calling a composite service made up of a chain of technical services bound together by one ESB or another. The problem here isn’t one of technology, it is about governance. How do I govern the contract of this integration? Who do I call when something goes wrong?

Let's think of a real example: I want to call an API in a cloud service provider – say, a bulk query on Salesforce that could return more than 2,000 rows. I now have to orchestrate a small process to handle the different parts of the request: I would execute a query and, based on a returned flag, execute a different type of query to bring back the rest of the data set, looping around this until the flag says it's the last block of data.
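The loop described above can be sketched in a few lines. This is a minimal illustration, not a real Salesforce client: the `done` flag and `queryLocator` mirror the query/queryMore pattern in Salesforce's SOAP API, but the `FakeSalesforceClient` stub and the `fetch_all` helper are hypothetical names invented here to show the control flow.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QueryResult:
    """One batch of results plus the 'are we done?' flag."""
    records: List[dict]
    done: bool
    queryLocator: Optional[str] = None

class FakeSalesforceClient:
    """Hypothetical stub that pages rows in small batches,
    standing in for a real Salesforce SOAP API client."""
    def __init__(self, rows, batch_size=2):
        self._rows, self._batch = rows, batch_size

    def query(self, soql):
        return self._page(0)                      # first batch

    def queryMore(self, locator):
        return self._page(int(locator))           # next batch

    def _page(self, start):
        end = start + self._batch
        return QueryResult(
            records=self._rows[start:end],
            done=end >= len(self._rows),          # last block of data?
            queryLocator=str(end),
        )

def fetch_all(client, soql):
    """Loop until the flag says this is the last block of data."""
    result = client.query(soql)
    records = list(result.records)
    while not result.done:
        result = client.queryMore(result.queryLocator)
        records.extend(result.records)
    return records
```

In an ESB this loop is exactly the "small process" being orchestrated: first call, check flag, follow-up call, repeat.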

I then need to assemble this into a large file, perhaps transform it and send it somewhere else to update some other master data. At implementation time, I wrap the Salesforce logic in an ESB process, and I can now call this composite service as part of the longer running business process.

So I have now abstracted a series of technical services from a cloud provider into my own API, which can be called from some other ESB process sitting somewhere else in the business (even on a different technology). What happens now if the data gets so large that it blows the memory in the ESB at runtime? Or if it takes so long that the web service call from the other platform times out and then, worst case, keeps retrying?
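One defensive pattern against the memory problem is to spool each batch to disk as it arrives, rather than assembling the whole result set in memory before writing the large file. A rough sketch, assuming the platform lets you process batches as a stream (the function name and the dict-per-row shape are assumptions for illustration):

```python
import csv

def stream_batches_to_file(batches, path):
    """Append each batch of rows to a CSV file as it arrives,
    so the full result set is never held in memory at once."""
    with open(path, "w", newline="") as f:
        writer = None
        for batch in batches:
            for row in batch:
                if writer is None:
                    # Initialise the header lazily from the first row.
                    writer = csv.DictWriter(f, fieldnames=row.keys())
                    writer.writeheader()
                writer.writerow(row)
```

The timeout/retry problem needs a different fix – making the composite service asynchronous (accept the request, return immediately, deliver the file later) so the caller never holds a connection open for the whole run – but that is a governance decision between the two platforms, not something either ESB can solve alone.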

The key thing for any business with multiple integration technologies in house is to implement a centre of excellence to govern the overall picture. This team needs to be independent, arbitrating between the different technologies at play, and it also needs to understand the key dependencies and versions of the different services on each platform.

What if (going back to my example) the external provider changes their API? Now I need to raise a high-priority support ticket to get the integration fixed. The key performance indicator here is the time to adapt to that change. As an aside, a few years ago I worked with a major telco (who shall remain nameless) that governed its entire SOA with a team of four people and an Excel spreadsheet. It amused me at the time, and still does, but it also demonstrates that the key thing is people and communication.

Jeff Bradshaw

Jeff Bradshaw is chief technology officer at Adaptris. Jeff is responsible for the overall technical leadership of the company. He sets the development roadmap, and works with the Chief Architect to deliver innovative software that the "field" organisation can take to market. Jeff also works with the operations team to ensure that the solutions deployed within the company's four data centres are always available. In addition to this, he liaises with customers, to ensure that solutions exceed their requirements, and grow with their business. Jeff has spent his working life in the B2B arena. Prior to forming Adaptris, Jeff worked with EDI and EAI industry leaders Perwill and Frontec (Axway) and end user organisations DHL, and Equitas. Jeff has been engaged as a consultant with many large partners including Progress and customers including British Airways, Scottish Widows, British Telecommunications, and Carrefour.

  • Todd Enneking, Cleo

    Great article. Your Salesforce/ESB example is one of the many cases showing why MFT needs to be an integral part of any company’s integration strategy. ESB/SOA is message-centric and generally used inside the firewall. MFT is file-centric and provides the backbone for Big Data and large volumes of data, as well as security and governance for all file movement, whether A2A, B2B or P2P.