International Agile Tester Foundation

Steen Lerche-Jensen

2.2 Status of Testing in Agile Projects

In Agile projects, changes take place often. When many changes happen, we can expect that test status, test progress, and product quality constantly evolve, and testers need to find ways to get that information to the team so that they can make decisions to stay on track for successful completion of each iteration.

Figure: Change Happens... Are you ready?

Change can affect existing features from previous iterations. Therefore, manual and automated tests must be updated to deal effectively with regression risk.
Brief summary:

  • In an Agile project, change happens, rapidly and often
  • When the features change, so does product quality
  • Change can make a mess of your status reporting processes if you’re not careful
  • Change also means that accurate test status is critical for smart team decisions
  • Change can have an impact on features from previous iterations, meaning regression testing is needed
  • Change often means existing tests must change

Let us look at some of these topics in more detail:

2.2.1 Communicating Test Status, Progress, and Product Quality

Agile Teams progress by having working software at the end of each iteration. To determine when the team will have working software, the teams can monitor progress in different ways:

  • Test progress can be recorded using automated test results, Agile task boards, and burndown charts
  • Test status can be communicated via wikis, standard test management tools, and during stand-ups
  • Project, product, and process metrics can be gathered (e.g., customer satisfaction, test pass/fail results, defects found/fixed, test basis coverage, etc.)
    • Metrics should be relevant and helpful
    • Metrics should never be misused
  • Automating the gathering and reporting of status and metrics allows testers to focus on testing
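Automating the gathering of such metrics can be as simple as deriving them from raw test and defect records. The following is a minimal sketch, assuming invented record formats (a list of "pass"/"fail" outcomes and defect dicts with a `status` field); no specific tool's data model is implied.

```python
# Illustrative sketch: deriving two common Agile test metrics from
# raw records. The data shapes are assumptions for this example.

def pass_fail_rate(results):
    """Return (pass_rate, fail_rate) for a list of 'pass'/'fail' outcomes."""
    total = len(results)
    passed = sum(1 for r in results if r == "pass")
    return passed / total, (total - passed) / total

def open_defect_count(defects):
    """Defects found minus defects fixed gives the open backlog."""
    found = sum(1 for d in defects if d["status"] in ("open", "fixed"))
    fixed = sum(1 for d in defects if d["status"] == "fixed")
    return found - fixed
```

In practice such functions would run inside a dashboard or CI job, so testers can read the numbers instead of compiling them by hand.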


Example of Kanban task board. For Key, see earlier.

It is common practice that the Sprint Backlog is represented on a Scrum board or task board, which provides a constantly visible depiction of the status of the User Stories in the backlog. Also included in the Sprint Backlog are any risks associated with the various tasks. Any mitigating activities to address the identified risks would also be included as tasks in the Sprint Backlog.
The story cards, development tasks, test tasks, and other tasks created during iteration planning are captured on the task board, often using color-coded cards to indicate the task type. During the iteration, progress is managed via the movement of these tasks across the task board into columns such as to do, work in progress, verify, and done. Agile Teams may use tools to maintain their story cards and Agile task boards, which can automate dashboards and status updates.
A bit more on some of the topics:

Burndown charts

Burndown charts can be used to track progress across the entire release and within each iteration. A burndown chart represents the amount of work left to be done against time allocated to the release or iteration.
The Sprint Burndown Chart is a graph that depicts the amount of work remaining in the ongoing Sprint. A planned burndown accompanies the initial Sprint Burndown Chart, and the chart should be updated at the end of each day as work is completed. It shows the progress made by the Scrum Team and allows incorrect estimates to be detected. If the Sprint Burndown Chart shows that the Scrum Team is not on track to finish the tasks in the Sprint on time, the Scrum Master should identify any obstacles or impediments to successful completion and try to remove them.
A related chart is a Sprint Burnup Chart. Unlike the Sprint Burndown Chart that shows the amount of work remaining, the Sprint Burnup Chart depicts the work completed as part of the Sprint.

Example of Sprint Burndown chart

The initial Sprint Backlog defines the starting point for the remaining effort. The remaining effort for all activities is collected daily and added to the graph. Early in the Sprint, performance is often below the ideal burndown rate because of incorrect estimates or impediments that must be removed before the team reaches full speed.
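The data behind a burndown chart is straightforward to compute. This sketch, with invented sprint length and effort figures, contrasts the ideal (linear) burndown line with actual remaining work:

```python
# Illustrative sketch of the numbers behind a Sprint Burndown Chart.
# Effort units and sprint length are invented for the example.

def ideal_burndown(total_effort, sprint_days):
    """Linear reference line: remaining effort at the end of each day."""
    return [total_effort - total_effort * day / sprint_days
            for day in range(sprint_days + 1)]

def behind_schedule(actual_remaining, ideal_remaining):
    """True on days where actual remaining work exceeds the ideal line."""
    return [a > i for a, i in zip(actual_remaining, ideal_remaining)]
```

A day-by-day comparison like `behind_schedule` is what makes incorrect estimates or hidden impediments visible early, rather than at the end of the Sprint.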
To provide an instant, detailed visual representation of the whole team’s current status, including the status of testing, teams may use Agile task boards.

Acceptance criteria

Testing tasks on the task board relate to the acceptance criteria defined for the User Stories. As test automation scripts, manual tests, and exploratory tests for a test task achieve a passing status, the task moves into the done column of the task board. The whole team reviews the status of the task board regularly, often during the daily stand-up meetings, to ensure tasks are moving across the board at an acceptable rate. If any tasks (including testing tasks) are not moving or are moving too slowly, the team reviews and addresses any issues that may be blocking the progress of those tasks.
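The rule above ("done only when everything passes") can be expressed as a tiny function. Column names and the status representation are assumptions for illustration:

```python
# Hedged sketch: a testing task moves to "done" only when every
# associated check (automated, manual, exploratory) has passed.

def task_column(test_statuses):
    """Return the task-board column for a test task given its results."""
    if not test_statuses:
        return "to do"           # no checks run yet
    if all(s == "pass" for s in test_statuses):
        return "done"            # all acceptance criteria verified
    return "work in progress"    # at least one check not yet passing
```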

Daily stand-up meeting

  • On Agile task boards, tasks move into columns (to do, work in progress, verify, and done)
  • Done means all tests for the task pass
  • Task board status reviewed during stand-ups, which include testers and developers (whole team)
  • Each attendee should address:
    • What have you completed since the last meeting?
    • What do you plan to complete by the next meeting?
    • What is getting in your way?
  • Team discusses any blockages or delays for any task, and works collaboratively to resolve them

Other ways to capture test and quality status

To secure overall product quality, Agile Teams may perform customer satisfaction surveys to receive feedback on whether the product meets customer expectations.
Agile Teams may also use other metrics similar to those captured in traditional development methodologies, such as:

  • Test pass/fail rates
  • Defect discovery rates
  • Confirmation and regression test results
  • Defect density
  • Defects found and fixed
  • Requirements coverage
  • Risk coverage
  • Code coverage
  • Code churn

As with any lifecycle model, the metrics captured and reported should be relevant and should aid decision-making. Metrics should not be used to reward, punish, or isolate any team members.

2.2.2 Managing Regression Risk with Evolving Manual and Automated Test Cases

In an Agile project, as each iteration completes, the product becomes more complex. Therefore, the scope of testing also increases.

Example Showing Business Requirements, Product Backlog, Sprints, Products and Incremental Delivery.

Regression Testing

Along with testing the code changes made in the current iteration, testers also need to verify no regression has been introduced on features that were developed and tested in previous iterations. The risk of introducing regression in Agile development is high due to extensive code churn (lines of code added, modified, or deleted from one version to another).

Test automation

Since responding to change is a key Agile principle, changes can also be made to previously delivered features to meet business needs. In order to maintain velocity without incurring a large amount of technical debt, it is critical that teams invest in test automation at all test levels as early as possible.

  • Automated tests at all levels reduce technical debt and provide rapid feedback
  • Run automated regression tests in Continuous Integration and before putting a new build in test
    • Automated unit tests provide feedback on code and build quality (but not on product quality)
    • Automated acceptance tests provide feedback on product quality with respect to regression.
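The distinction between the two feedback levels can be made concrete with a small example. The shopping-cart functions and test names below are invented; plain `assert` stands in for a real test framework:

```python
# Minimal illustration of the two automated test levels named above.
# The shopping-cart code under test is invented for this example.

def add_item(cart, item, price):
    """Add one item to a cart (a dict of item -> price)."""
    cart[item] = price
    return cart

def cart_total(cart):
    """Sum of all item prices in the cart."""
    return sum(cart.values())

def test_cart_total_unit():
    # Unit level: fast, isolated; feedback on code/build quality only.
    assert cart_total({"book": 10, "pen": 2}) == 12

def test_checkout_acceptance():
    # Acceptance level: exercises the user-visible flow end to end,
    # giving feedback on product behavior (here, regression on checkout).
    cart = add_item({}, "book", 10)
    cart = add_item(cart, "pen", 2)
    assert cart_total(cart) == 12
```

In a real project the unit tests would run on every check-in, while the acceptance tests would run in the Continuous Integration full system build, as described below.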

Figure: Continuous Integration. For Key, see earlier.

Continuous Integration

  • Fix regression and build bugs immediately
  • For efficiency, also automate test data generation and loading, build deployment, environment management, output comparison.

Use of test automation, at all test levels, allows Agile Teams to provide rapid feedback on product quality. Well-written automated tests provide a living document of system functionality [Crispin08]. By checking the automated tests and their corresponding test results into the configuration management system, aligned with the versioning of the product builds, Agile Teams can review the functionality tested and the test results for any given build at any given point in time.
Automated unit tests are run before source code is checked into the mainline of the configuration management system to ensure the code changes do not break the software build. To reduce build breaks, which can slow down the progress of the whole team, code should not be checked in unless all automated unit tests pass. Automated unit test results provide immediate feedback on code and build quality, but not on product quality.
Automated acceptance tests are run regularly as part of the Continuous Integration full system build. These tests are run against a complete system build at least daily, but are generally not run with each code check-in as they take longer to run than automated unit tests and could slow down code check-ins. The test results from automated acceptance tests provide feedback on product quality with respect to regression since the last build, but they do not provide status of overall product quality.
Automated tests can be run continuously against the system. An initial subset of automated tests to cover critical system functionality and integration points should be created immediately after a new build is deployed into the test environment. These tests are commonly known as build verification tests. Results from the build verification tests will provide instant feedback on the software after deployment, so teams don’t waste time testing an unstable build.
Automated tests contained in the regression test set are generally run as part of the daily main build in the Continuous Integration environment, and again when a new build is deployed into the test environment. As soon as an automated regression test fails, the team stops and investigates the reasons for the failing test. The test may have failed due to legitimate functional changes in the current iteration, in which case the test and/or User Story may need to be updated to reflect the new acceptance criteria. Alternatively, the test may need to be retired if another test has been built to cover the changes. However, if the test failed due to a defect, it is a good practice for the team to fix the defect prior to progressing with new features.
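The triage decision described above, taken when a regression test fails, can be summarized in code. The category labels and boolean inputs are invented for illustration:

```python
# One possible encoding of the failing-regression-test triage described
# above. Inputs and outcome labels are assumptions for this sketch.

def triage_failed_test(functional_change, covered_elsewhere):
    """Decide what to do with a failing automated regression test."""
    if functional_change:
        if covered_elsewhere:
            return "retire test"      # another test covers the change
        return "update test"          # reflect the new acceptance criteria
    return "fix defect before new features"
```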
In addition to test automation, the following testing tasks can also be automated:

  • Test data generation
  • Loading test data into systems
  • Deployment of builds into the test environments
  • Restoration of a test environment (e.g., the database or website data files) to a baseline
  • Comparison of data outputs

Automation of these tasks reduces the overhead and allows the team to spend time developing and testing new features.
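Two of the tasks listed above, test data generation and baseline restoration, can be sketched in a few lines. Here the "database" is just a dict, and the record shapes are invented; a real project would target an actual datastore:

```python
# Hedged sketch of two automatable testing tasks: generating
# deterministic test data and restoring an environment baseline.

import copy

def generate_customers(n):
    """Deterministic, repeatable test data generation."""
    return [{"id": i, "name": f"customer-{i}"} for i in range(n)]

def restore_baseline(live_db, baseline):
    """Reset the environment to a known state between test runs."""
    live_db.clear()
    live_db.update(copy.deepcopy(baseline))  # copy so the baseline stays pristine
    return live_db
```

Deterministic generation and a cheap reset are what make tests repeatable; without them, each run inherits state from the previous one.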
Test automation done badly can be costly, so it is also important for the tester in the Agile Team to consider the whole test tool and test automation process, as detailed in the Test Automation figure below.

Figure: Test Automation. For Key, see earlier.

Keeping tests updated

It is also critical that all test assets, such as automated tests, manual test cases, test data, and other testing artifacts, are kept up to date with each iteration. It is highly recommended that all test assets be maintained in a configuration management tool in order to:

  • enable version control,
  • ensure ease of access by all team members,
  • support making changes as required due to changing functionality while still preserving the historic information of the test assets.

Complete repetition of all tests is seldom possible, especially in tight-timeline Agile projects. Testers need to allocate time in each iteration to review manual and automated test cases from previous and current iterations to select test cases that may be candidates for the regression test suite, and to retire test cases that are no longer relevant. Tests written in earlier iterations to verify specific features may have little value in later iterations due to feature changes or new features which alter the way those earlier features behave.
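The review step described above, keeping tests that still map to live features and retiring the rest, can be sketched as a simple partition. The test-case records and feature identifiers are illustrative assumptions:

```python
# Hedged sketch of the per-iteration suite review: keep tests whose
# feature still exists, retire the rest. Record shapes are invented.

def review_suite(test_cases, live_features):
    """Split test cases into a regression suite and retired tests."""
    regression, retired = [], []
    for tc in test_cases:
        if tc["feature"] in live_features:
            regression.append(tc)
        else:
            retired.append(tc)   # feature changed or removed
    return regression, retired
```

In practice the "still relevant" check involves human judgment, but tracking a feature link per test case makes the candidates for retirement easy to find.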
While reviewing test cases, testers should consider suitability for automation. The team needs to automate as many tests as possible from previous and current iterations. This allows automated regression tests to reduce regression risk with less effort than manual regression testing would require. This reduced regression test effort frees the testers to more thoroughly test new features and functions in the current iteration.
It is critical that testers have the ability to quickly identify and update test cases from previous iterations and/or releases that are affected by the changes made in the current iteration. Defining how the team designs, writes, and stores test cases should occur during release planning. Good practices for test design and implementation need to be adopted early and applied consistently. The shorter timeframes for testing and the constant change in each iteration will increase the impact of poor test design and implementation practices.
