Sunday, March 9, 2008

running systems integration delivery hubs

A large SI (systems integration) project typically involves the requirements, design, development, integration, testing and deployment of an integrated enterprise software release. This involves the work of a large number of people spread across numerous teams, usually across the organization and often across organizational boundaries.

Ideally, there are only two roles in this setup: a business role and a technical role. The business role represents the 'user' or the 'customer' and leads on requirements specification & testing. The technical role leads on design, development, integration and deployment. Of course, each activity involves folks from both roles; however, it is important to keep perspective of who 'leads'.

The main source of inefficiency within any structure is communication; it is the hardest problem to solve. That is why people with multi-dimensional skills are so much more effective than people with a single dimension or specialized role.

Having separate teams perform each specialized function (requirements, design, development, integration, testing, deployment etc.) causes havoc; imagine the inefficiency and discontinuity created by the hand-offs. Where possible, such hand-offs should be minimized; however, some specialization is inevitable when you scale up.

The establishment of the critical roles (leadership) and the required infrastructure is key to success.

Delivery Hub must-haves:
1> co-location facility with open floor plan, plenty of whiteboards, and 2-3 breakout rooms.

2> strongest presence from the e2e testing team (ideally also the 'users').

3> e2e solution design team (also ideally the integration leads).

4> representation from each of the component teams (project manager + 2-3 key developers)

5> e2e test environments - ideally 3: one a production mirror, another for development-integration and the last for integration-release. The config and release management for the production-mirror and integration-release environments must be formally managed to ensure integrity and the uptime needed for testing productivity. Expect the dev-int environment to be the most dynamic of the three, with daily drops from component teams (see the sketch after this list).

6> Core delivery hub roles :
a> overall program lead with program management support
b> overall architect / design lead (ideally also the integration lead)
c> overall business lead (requirements, business process etc.)
d> testing lead (ideally also the requirements lead)
e> test/config management lead (manages change control and integrity of the test environments, also ideally leads the software deployment to production)
f> deployment lead (focussed on pulling together all aspects of the deployment - training, data grooming, documentation, systems conversion & cut-over etc.)
g> e2e support lead (service management including e2e monitoring)

7> Daily program stand-up. I always did a 6 pm (1 hr) stand-up every day that helped bring focus to our execution.
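
As a rough illustration of the change-control split described in item 5, here is a minimal sketch (Python, with hypothetical environment names and fields) of how the three environments and their release policies might be catalogued and gated; the exact policies would of course vary per program.

```python
from dataclasses import dataclass

# Hypothetical catalogue of the three e2e test environments from item 5.
# "change_ticket_required" reflects the formal config/release management
# expected for the production-mirror and integration-release environments;
# the dev-int environment accepts daily drops from the component teams.

@dataclass
class Environment:
    name: str
    purpose: str
    change_ticket_required: bool

ENVIRONMENTS = {
    "prod-mirror": Environment("prod-mirror", "production mirror", True),
    "dev-int": Environment("dev-int", "development-integration (daily drops)", False),
    "int-release": Environment("int-release", "integration-release", True),
}

def can_deploy(env_name: str, has_change_ticket: bool) -> bool:
    """Gate a component drop: formally managed environments need an approved ticket."""
    env = ENVIRONMENTS[env_name]
    return has_change_ticket or not env.change_ticket_required

# A daily drop goes straight into dev-int, but not into int-release.
assert can_deploy("dev-int", has_change_ticket=False)
assert not can_deploy("int-release", has_change_ticket=False)
```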

In the project, there are three key phases that you will encounter. I am assuming that some form of agile or spiral methodology is used, so these phases, while distinct, may not map to formal milestones. You will know when you are in each phase, and effort must be made to prioritize the activities that allow you to move to the next one.

Phase I: The requirements and design are still under a fair amount of churn and are incomplete.
Phase II: The scope is now locked down, the designers have more or less finished their work, and the developers have plenty to do and are on the critical path.
Phase III: Testing/integration is the critical path, with the developers and designers fixing defects.

[Diagram: activities and team dynamics across phases I-III]

The diagram above gives you a sense of the dynamics and activities in the project. A few core principles to highlight:
a> In phase I, it is the designers who are under stress. Everyone should help to make them productive. What this really means is that the component development teams help with the e2e solution design rather than waiting to be spoon-fed a document. The burden is really more the feeling of being the bottleneck than the technical challenge. This also solves another standard pitfall, in that it improves the efficiency of knowledge transfer from design to development.
b> Tight requirements documentation and change control from the start are a must.
c> Test case development starts with the requirements; ideally, use cases are translated into test scenarios from the start. Data setup demands for the test environments are also looked at from the start.
d> In phase II, force early drops from the development teams. Any and all early feedback from the 'users' and 'testers' helps de-risk the project. Early integration = SUCCESS! Get into the specify/design-code-integrate-test spiral as quickly as possible. In this phase, give as much flexibility as you can to the development teams. Anticipate frustration from the testers due to the quality and downtime of the test environments, but do not underestimate the value of their early involvement; it is crucial that they stay engaged and provide any and all early feedback. The only thing not to compromise on with the development teams is releasing into test. In this phase, testers are usually buddies with the developers.
e> In phase III, testers become antagonistic towards the developers. Here the balance shifts to testing productivity. You must favor discipline in release management for the test environments over the development teams' inclination to fix (aka change) things constantly.

On the e2e project plan:
The critical artifact for me was what I called the 'component integration matrix'. It was a simple spreadsheet with a breakdown of the functionality of the enterprise release on one axis and the components on the other. The solution design outlines which components are involved in each functional grouping. The program managers, working with the component delivery managers, would work out the release dates and software versions at which each component would be ready for integration. The easy optimization this allowed was to align/prioritize the work for each component so that e2e functional threads would be completed first, thereby allowing the e2e test teams to get going. In parallel to the design, the test teams aligned their e2e test cases to each of these horizontals and reported test coverage along the same lines. That way, I could see in a single spreadsheet the convergence and critical path of the program from a software development and integration perspective.



This was always more useful than the prettiest Gantt charts. Only once this was finalized did I have any sense of a target date, so completing it was always a priority.
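
To make the idea concrete, here is a minimal sketch of the component integration matrix in Python, with made-up functional threads, components and dates (the real artifact was a spreadsheet): each e2e functional thread becomes integrable only when the last of its components is ready, and the latest thread is what drives the convergence and critical path.

```python
from datetime import date

# Hypothetical ready-for-integration dates agreed with the component delivery managers.
component_ready = {
    "billing-engine": date(2008, 4, 14),
    "crm-adapter":    date(2008, 3, 31),
    "web-portal":     date(2008, 4, 7),
    "provisioning":   date(2008, 4, 28),
}

# Hypothetical functional breakdown of the enterprise release (rows of the matrix),
# each mapped to the components the solution design says it touches.
threads = {
    "new customer order":  ["web-portal", "crm-adapter", "provisioning"],
    "monthly invoicing":   ["billing-engine", "crm-adapter"],
    "self-service portal": ["web-portal", "crm-adapter"],
}

# A thread can enter e2e test only once all of its components have dropped, so its
# integrable date is the latest of its components' ready dates.
thread_ready = {
    name: max(component_ready[c] for c in components)
    for name, components in threads.items()
}

for name, ready in sorted(thread_ready.items(), key=lambda kv: kv[1]):
    print(f"{name:20s} integrable from {ready}")

print("program convergence (last thread integrable):", max(thread_ready.values()))
```

Sequencing component work so that the earliest threads complete first is exactly the prioritization the matrix made easy, and aligning the e2e test cases to the same rows gives test coverage along the same lines.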

-------------------------------
NB> This is by no means expected to represent a comprehensive view of the subtleties or complexities within a delivery hub ... just some food for thought. It represents 15 years of experience brain-dumped in 30 minutes, so take it for what it's worth.

2 comments:

Anonymous said...

It is indeed good food for thought. These thoughts could very well be harmonized into an existing software process, but doing so for the first time would unearth some pragmatic challenges.

One important aspect you may want to look at is the Change Control Board's and Program Office's duties. If these two are properly set up, then managing big projects becomes fairly easy. You have covered these in a different way, but I would stress giving these teams more importance.

The CCB should only be for the 3rd and most critical environment, i.e. production. They should reserve the right to change anything on that system. I am sure you too have faced the issue of having 3 different versions of the same software on 3 environments, and more often than not finding that the version deployed on production is quite different from the one in the change control system (VSS, StarTeam). You can do little at that point other than spend money to reconcile, and even that does not prove very useful.

I agree with you that early integration means success, and that's a very nice way of looking at big projects, especially new systems integrating with legacy systems, or systems using older or different (disparate) methods of integration.

One pence from my side: I always had a deployment matrix and drove my production environment backwards, i.e. what is on the production environment versus what is needed. It has all the software, hardware and integration details. It is not a big document ... just 7-8 pages, but it can give you all the details at a glance. This suggestion may seem out of place with your current blog, but I feel it's important to share it with you now.

Cheers and thank you once again for a great treat.

Milan Gupta said...

Understood and accepted. Funny thing is that I am actually a fairly process/structured guy when it comes to execution. Not so much when it comes to problem solving.

I was always one of those who designed on the fly, but then again, it matched my style of refactoring my code several times before release.

The trouble I have with CCBs is that they tend to become bureaucracies. If the right people are running it, they are very effective ... they have to be able to truly understand the impact/significance of a change when they approve/disapprove it. That is the key aspect to maintaining agility within the team. It is absolutely a critical function within the delivery hub.