Agile Test Strategy and Structure

Agile and Lean thinking changed the way we develop software. That’s for sure. But I think it changed the way we implement software less than it changed the world around it. Testing, for example.

There are a lot of good blogs about Agile testing, but the Wikipedia article is, at the moment and in my opinion, too short to provide encyclopedic coverage of the subject. I think this reflects how straightforward it is to discuss introducing Agile values and principles to testing, and how hard it is to find practical commonalities that would fit everyone. This is yet another piece of experience sharing about test strategy and how to organize testing.

The Best Test Strategies Emerge from Self-Organizing Teams

Agile principles state that the best architectures, requirements, and designs emerge from self-organizing teams. I would like to extend this to test strategy.

In the waterfall world, I always questioned people who claimed that you should “test as early as possible”. In waterfall, “as early” means tests at as low a level as possible. When trying to cover all possible scenarios that would occur on integrated SW and HW, e.g. in high-load situations, the low-level test suites become large, complex, and hard to maintain.

An old-school test strategy defines in which test phase different types of tests should be done, what tools should be used, etc. There are exit and entry criteria that harden the waterfall thinking. The test strategy is often made by a formal test leader who tries to optimize the test chain up-front. This leads to over-simplification that is hard to put into practice when the features and functionalities change.

The best test strategies emerge from cross-functional self-organizing teams. When all test activities are done in cross-functional teams, the teams have the authority to define the test strategy: they can choose in which test environment each test activity is executed. There are no exit/entry criteria since there are no handovers. Testing as early as possible becomes meaningless; testing as efficiently as possible takes over.

Do we have a test strategy that is different for each team? No. Self-organizing teams will sub-optimize, but that sub-optimization turns into the famous “optimizing the whole” when the teams collaborate. The best architectures and the best test strategies emerge only if the teams collaborate.

The Separate Test Team Problem

Should we have all test activities in the cross-functional team? In the Agile principles, frequent delivery of working software translates to potentially shippable software after each sprint in Scrum. In by-the-book Scrum, this means we should have all test activities in Scrum teams. I think in the area of embedded software we have to make compromises. I would not put an Airbus test pilot into a team developing cockpit software, and I would not run the test flight after each iteration in each team.

In our context, having all test activities in a Scrum team is in conflict with the basic definition of a team. We have test activities that ensure the non-functional properties and characteristics of the whole product. These cannot be connected directly to any feature or any user story. These tests are sometimes needed even without any implementation changes. There is no interdependency between team members here; we have neither the synergies nor the shared goals.

There are also other reasons to exclude some test activities from Scrum teams, e.g. specialized competence, expensive test environments, etc. But I think those can be fixed. The team aspect we can’t.

In the end, we have to make sure that the test teams follow the sprint pulse scope-wise and rebase their software from the Continuous Integration flow. We should avoid a mini-waterfall. Test teams should be just like any other team, but with their own specialization. Together with the Scrum teams, the test teams should be responsible for having a potentially shippable product after each sprint.
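One way to keep both the Scrum teams and a specialized test team pulling from the same CI flow is to route test activities to environments and stages explicitly. The following is a minimal, hypothetical sketch; the suite names, environments, and stage names are illustrative assumptions, not something from any particular CI tool:

```python
# Hypothetical registry of test activities. Fast suites run per commit in the
# Scrum teams' environments; the heavy system-level suite runs on the sprint
# pulse in the test team's target-hardware environment.
SUITES = {
    "unit":        {"env": "host",      "owner": "scrum-team", "per_commit": True},
    "component":   {"env": "simulator", "owner": "scrum-team", "per_commit": True},
    "system-load": {"env": "target-hw", "owner": "test-team",  "per_commit": False},
}

def suites_for(stage: str) -> list[str]:
    """Return the suite names to run for a CI stage.

    'commit' selects only the fast, per-commit suites;
    'sprint' selects the full scope, including the test team's suites.
    """
    if stage == "commit":
        return [name for name, suite in SUITES.items() if suite["per_commit"]]
    if stage == "sprint":
        return list(SUITES)
    raise ValueError(f"unknown stage: {stage}")
```

The point of the sketch is that there is one shared definition of the whole test scope: the test team’s suites are not a separate phase with a handover, just a different selection from the same flow.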



4 Responses to Agile Test Strategy and Structure

  1. Hi Janne,

    I don’t quite agree with excluding cross-cutting concerns such as non-functional aspects from a team just because they might not relate directly to features or, ultimately, business value. The same would apply to any other cross-cutting concern such as architecture & design, implementation of logging, etc. Anyway, I guess that’s the implementation part of the stories at task level.

    Moreover, there might be sprints focusing on sprint goals such as improving overall performance, this way even supporting business value, if high-load capability is a main objective of the product.

    On the other hand, there are situations where organizations tend to outsource testing efforts nowadays, especially in the automation area. It is questionable whether this is beneficial at all, or whether it’s wise to fully separate teams and their activities in this area – especially when talking about *quality*.

    However, in the offshoring scenario I could imagine two distinct approaches: 1) going for distributed or dispersed teams, or 2) going for a dedicated test improvement team, much as I would use an integration team if it were a more complex project implemented by means of Scrum of Scrums.

    best, Michael

    • Janne Irmola says:

      Thanks, Michael, for your comment!

      I totally agree with you about off-shoring of the test activities. It should not happen. When distributing the work to multiple sites, the whole SW workflow should exist under the same roof.

      Maybe we are working with different kinds of products, since for us the non-functional aspects relate directly to business value. In fact, they might have the highest value.

      I’ll still try to justify my opinion on separate test teams with one example from the embedded world. Let’s develop traction control for Audi. We have teams developing the controlling software and teams working on the chassis, etc. We have a non-functional requirement for best-in-class cornering capability. Would you put the ex-rally driver into the SW team or not? Is it enough if the team tests the functional requirements (how the SW should react to different inputs from sensors, etc.) and a separate team of rally drivers gives feedback during the sprints on our non-functional requirement?

  2. Anonymous says:

    Hi Janne, (Sorry for my very late response, I didn’t have time earlier.)

    I must say that you hit the nail on the head when saying “… how hard it is to find practical commonalities that would fit all”. That is a very well-known dilemma in testing. Testing is always context-specific, and therefore it is very hard to find commonalities, especially when it comes to practicalities. “One size doesn’t fit all.”

    As an old-school tester I must comment on your interpretation of the old-school test strategy “test as early as possible”. In waterfall, it was the way to shorten the feedback loop and to save costs in overall testing, troubleshooting, fault correction, etc. “Test as early as possible” was a guiding star, not a demand to do it when it was not feasible, or was more expensive, or even inefficient. It wasn’t an unconditional, do-it-without-thinking rule. You can compare it to today’s test automation strategy saying “you should automate all your tests”. It is a good guiding star, but should not be followed blindly. It means you should automate all your tests when it is feasible and cost-efficient.

    When talking about test strategy in Agile, I think the main principle hasn’t changed that much compared to the old-school test strategy. The aim is still to get the feedback loop as short as possible. The difference is that the work is no longer done in phases, so a shorter feedback loop no longer means “test as early as possible”. There is no time aspect.

    I agree with what you say about test strategies in cross-functional teams, except that at a detailed level each team should have its own specific test strategy for each feature it is developing. That is because testing is always context-specific. In which environment tests are executed, and which tests are implemented and run, should be chosen by the teams based on efficiency (as you said), but I believe these decisions depend very much on the feature being developed, so at a detailed level teams will have “different strategies”.

    Keeping “the whole” in mind, these strategies must be known by the “separate test teams” and aligned with their test strategy. Otherwise there is a risk of overdoing or omitting some tests (especially when it comes to non-functional requirements).


  3. Ismo Paukamainen says:

    Anonymous in the last reply was Ismo Paukamainen
