Friday, May 2, 2008

Performance Engineering

A good article by Alok Mahajan & Nikhil Sharma of Infosys! I'm quite stunned, to be frank.

Tuesday, March 18, 2008

on colocation

One of the interesting things I found was the impact of colocation on the productivity of teams.

Consider a team of six people working on a module. Try this out: place them at random seats on an office floor, out of line of sight of each other. What you will find is that they may as well be in different countries.

Now place them in a six-person pod, or in cubicles adjacent to each other. You will be amazed at the difference this makes to their productivity.

Back in '96, I was amazed one day to find that one team had started the habit of standing up on their desks (across six partitioned cubicles) for a quick daily discussion/debrief. It was quite funny, as this was before the days of 'agile' and 'stand-ups'. They found it quite effective to just stand up on their desks and confer whenever they had a quick team decision to make.

Just something that made me smile back then.

Sunday, March 9, 2008

running systems integration delivery hubs

A large SI (systems integration) project typically involves the requirements, design, development, integration, testing and deployment of an integrated enterprise software release. This involves the work of a large number of people spread across numerous teams, usually across the organization and often across organizational boundaries.

Ideally, there are only two roles in this setup: a business role and a technical role. The business role represents the 'user' or the 'customer' and leads on requirements specification and testing. The technical role leads on design, development, integration and deployment. Of course, each activity involves people from both roles; however, it is important to keep perspective on who 'leads'.

The main source of inefficiency within any structure is communication. It is the hardest problem to solve. That is why people with multi-dimensional skills are so much more effective than people confined to a single dimension or specialized role.

Having separate teams perform each specialized function (requirements, design, development, integration, testing, deployment, etc.) causes havoc: imagine the inefficiency and discontinuity created by the hand-offs. Where possible, such hand-offs should be minimized; however, some specialization is inevitable when you scale up.

The establishment of the critical roles (leadership) and the required infrastructure is key to success.

Delivery Hub must-haves.
1> co-location facility with open floor plan, plenty of whiteboards, and 2-3 breakout rooms.

2> strongest presence from the e2e testing team (ideally also the 'users').

3> e2e solution design team (also ideally the integration leads).

4> representation from each of the component teams (project manager + 2-3 key developers)

5> e2e test environments - ideally three: one a production mirror, one for development-integration, and one for integration-release. Config and release management for the production mirror and integration-release environments must be formally managed to ensure integrity and the uptime needed for testing productivity. Expect the dev-int environment to be the most dynamic of the three, with daily drops from component teams.

6> Core delivery hub roles :
a> overall program lead with program management support
b> overall architect / design lead (ideally also the integration lead)
c> overall business lead (requirements, business process etc.)
d> testing lead (ideally also the requirements lead)
e> test/config management lead (manages change control and integrity of the test environments, also ideally leads the software deployment to production)
f> deployment lead (focused on pulling together all aspects of the deployment - training, data grooming, documentation, systems conversion & cut-over, etc.)
g> e2e support lead (service management including e2e monitoring)

7> Daily program stand-up. I always ran a one-hour stand-up at 6 pm every day; it helped bring focus to our execution.
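The environment discipline in point 5 can be sketched as a simple release gate. This is a hypothetical model (the environment names and policy fields are assumptions for illustration, not from any real tooling): the formally managed environments only accept drops that have passed change control, while dev-int takes daily drops from component teams.

```python
# Illustrative model of the three e2e environments (names assumed).
ENVIRONMENTS = {
    "prod-mirror": {"change_control": "formal", "drop_cadence": "per release"},
    "int-release": {"change_control": "formal", "drop_cadence": "per milestone"},
    "dev-int":     {"change_control": "light",  "drop_cadence": "daily"},
}

def can_accept_drop(env: str, approved_by_config_mgr: bool) -> bool:
    """Formally managed environments only take change-controlled drops;
    the lightly managed dev-int environment accepts any daily drop."""
    policy = ENVIRONMENTS[env]["change_control"]
    return policy == "light" or approved_by_config_mgr
```

The point of making the gate explicit is that the test/config management lead, not the component teams, decides what lands in the two formally managed environments.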

In the project, there are three key phases that you will encounter. I am assuming that some form of agile or spiral methodology is used, so while these phases are distinct, they may not map to discrete milestones. You will know when you are in each phase, and effort must be made to prioritize activities that allow you to move to the next phase.

Phase I: The requirements and design are still incomplete and under a fair amount of churn.
Phase II: The scope is now locked down; the designers have more or less finished, and the developers now have plenty to do and are on the critical path.
Phase III: Testing/integration is the critical path, with the developers and designers fixing defects.

The diagram above gives you a sense of the dynamics and activities in the project. A couple of core principles to highlight.
a> In Phase I, it is the designers who are under stress. Do everything you can to make them productive. What this really means is that component development teams help with the e2e solution design rather than waiting to be spoon-fed a document. The burden is really more the feeling of being the bottleneck than the technical challenge. This also addresses another standard pitfall: it improves the efficiency of knowledge transfer from design to development.
b> Tight requirement documentation and change control from the start is a must.
c> Test case development starts with the requirements .. ideally, use cases are translated to test scenarios from the start. Data setup demands for the test environments are looked at from the start.
d> In Phase II, force early drops from the development teams. Any and all early feedback from the 'users' and 'testers' helps de-risk the project. Early integration = SUCCESS! Get into the specify/design-code-integrate-test spiral as quickly as possible. In this phase, give the development teams as much flexibility as possible. Anticipate frustration from the testers over quality and the downtime of test environments, but do not underestimate the value of their involvement: it is crucial that they stay engaged and provide any and all early feedback. The one thing not to compromise on with the development teams is releasing into test. In this phase, testers are usually buddies with the developers.
e> In Phase III, testers become antagonistic towards the developers. Here the balance shifts to testing productivity. You must favor discipline in release management for the test environments over the development teams' inclination to constantly fix (aka change) things.

On the e2e project plan :
The critical artifact for me was what I called the 'component integration matrix'. It was a simple spreadsheet with a breakdown of the functionality of the enterprise release on one axis and the components on the other. The solution design outlines which components are involved in each functional grouping. The program managers, working with the component delivery managers, would work out the release dates and software versions for the components ready for integration. The easy optimization this allowed was to align and prioritize the work for each component so that e2e functional threads would be completed as a priority, thereby allowing the e2e test teams to get going. Of course, in parallel to the design, the test teams aligned their e2e test cases to each of these horizontals and reported test coverage along the same lines. That way, I could see in a single spreadsheet the convergence and critical path of the program from a software development and integration perspective.
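A minimal sketch of the idea behind such a matrix, with illustrative thread names, component names, and dates (none are from the original project): each functional thread becomes e2e-testable only once its last component drop lands, so sorting threads by that date gives the prioritization described above.

```python
from datetime import date

# Hypothetical component integration matrix: functional threads on one
# axis, the components each thread touches on the other.
matrix = {
    "order capture": ["CRM", "Billing"],
    "provisioning":  ["CRM", "Inventory", "Activation"],
    "billing run":   ["Billing", "Mediation"],
}

# Dates by which each component's integration-ready drop is promised
# by its delivery manager (illustrative).
component_ready = {
    "CRM":        date(2008, 4, 1),
    "Billing":    date(2008, 4, 15),
    "Inventory":  date(2008, 5, 1),
    "Activation": date(2008, 5, 10),
    "Mediation":  date(2008, 4, 20),
}

def thread_ready_date(thread: str) -> date:
    """A thread can enter e2e test only when its last component drop
    lands, so its ready date is the max over its components."""
    return max(component_ready[c] for c in matrix[thread])

# Prioritize so that whole functional threads complete first: sort
# threads by how soon they become e2e-testable.
schedule = sorted(matrix, key=thread_ready_date)
for t in schedule:
    print(f"{t}: testable from {thread_ready_date(t)}")
```

The same structure extends naturally to carry software versions and test-coverage figures per thread, which is what made the single spreadsheet a better convergence view than a Gantt chart.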

This was always more useful than the prettiest Gantt charts. Only finalizing it gave me any real sense of a target date, so completing it was always a priority.

NB> This is by no means a comprehensive view of the subtleties or complexities within a delivery hub ... just some food for thought. It represents 15 years of experience brain-dumped in 30 minutes, so take it for what it's worth.