By Martin Uhlig
Certainly some developers and testers in agile environments are familiar with the following situation: a team has been working on a product for a long time, but without a dedicated tester, so quality requirements have been neglected. But now – just before the product release – everything is supposed to get better, and the team is to be supported by additional testers.
At the beginning of this year, I became involved in a similar situation when I was assigned as a tester to a Scrum team that had suffered from a high turnover of testers. Apart from the unit tests that the developers had established, there were no automated tests. But as the product release moved closer, the team had to deal with various short-term decisions regarding product changes. We had to find a way to test the product’s features as well as its non-functional criteria in a fast and reliable manner.
Starting to automate integration tests and GUI tests at this stage of the project would have been futile. We needed a manual solution.
In collaboration with our Product Owner (PO), we created a concept for a pure quality assurance sprint (QA Sprint) dedicated solely to testing, fixing, and retesting. No feature stories were planned for this sprint.
But how could we test the whole product in such a short time span? The test scope needed for a convincing and informative result cannot be covered in a two-week iteration – not even with all nine team members. After all, several different configurations of the software had to be covered by the tests. But we were very lucky: five additional testers and developers had agreed to support our tests. So we had enough workforce. But how to manage all these people?
From the outset it was obvious that the team could not simply be extended. There is no way to conduct an effective Sprint Planning or Daily Scrum with 14 team members (plus Scrum Master), and the whole team would have had to reorganise itself. This was not a practicable solution, not even for a single sprint. As a consequence, we had to find another way to integrate the additional testers.
But what approach would work? The answer seems simple – we needed more teams! But a new Scrum team cannot simply be conjured out of thin air, especially for just one sprint. So we distinguished between the Scrum team and the QA teams. The Scrum team, consisting of the former team, would basically work as usual in the best Scrum manner. To strengthen it, we needed the additional testers, but for the reasons mentioned above we could not take them aboard the Scrum boat. So we created two new teams that were substantially self-organized but not a fixed part of the Scrum team. These QA teams focused on repeatedly running a given set of tests. The test sets had been worked out, and were iteratively improved, by the tester in the Scrum team (supported by the whole Scrum team). The work of the QA teams was to be strictly separated from the Scrum team to avoid lowering the Scrum team's performance. The teams therefore needed an interface to filter the information flowing towards the Scrum team and thus avoid an information overflow (i.e. only actual bugs, no duplicates, etc.). The Scrum team itself would focus on reproducing and fixing the bugs. We decided to staff the interface between the teams with the PO and the tester from the Scrum team. The only exception to the team separation was the Daily Scrum: besides the Scrum team, one agent from each QA team attended to report their team's current status.
This plan was refined to balance the teams better. One of the Scrum team's developers moved to a QA team to test with them. Thus, every QA team had three members, including one experienced tester who headed the execution of the tests. Furthermore, the Scrum team had two relatively new members who were not yet sufficiently trained to fix bugs in this complex software quickly and effectively. These colleagues were given the mission of performing exploratory testing outside the test sets and of reproducing bugs.
In summary, our final setup consisted of two QA teams executing a set of tests. These tests included all the positive and the most important negative test cases. Every team worked with a different product configuration. The QA teams' tests were supplemented by exploratory tests from the two fresh developers in the Scrum team. Six developers within the Scrum team took care of bugfixing and deployment. This way, two teams were established that supported the Scrum team without any significant negative effects on the Scrum team's self-organization (shown in Figure 1). The atypical ratio of developers to testers, in favour of developers, was less problematic because the PO had enough old known issues in her backlog to keep the developers busy until the testers reported the first bugs.
The QA sprint started just like any other sprint for the Scrum team, except that the Sprint Planning was shorter than usual. Only a few known issues were presented by the PO in Sprint Planning. During the sprint the PO and the Scrum team’s tester evaluated the bugs found by the QA teams and added them to the Scrum team’s sprint backlog in a prioritized manner.
Besides the Scrum team’s Sprint Planning, there was a kick-off meeting for the QA teams. They were instructed to run the positive and negative test cases from the committed test sets and to document the bugs. After the execution of the whole test set (duration approx. 2 days), it was reviewed and improved based on the teams' impressions. Finally, the test sets were extended with retests of the bug fixes in the current version. After that procedure, the test sets were ready to be executed in the QA teams' next iteration. As a result, both QA teams always had the same version of the product in each iteration, but with two different configurations. Because the latest version was freshly installed for the QA teams in each iteration, the installation and update mechanism of the software was repeatedly tested by the PO and the Scrum team's tester.
To thank and to motivate the QA teams, we used some elements from the concept of gamification. For example, we offered awards and small prizes, such as for the most critical bug found or for the QA team that found the most bugs.
The kick-off was the official start for the QA teams. The first complete execution of the test cases took exactly the assumed time of two days. As an additional advantage, every team had a member who already knew the product, so the other members could benefit from their knowledge and climb the steep learning curve faster.
On the first day, the Scrum team worked on the previously known issues until the first bugs were delivered by the QA teams. Additionally, the two exploratory testers were able to produce some interesting insights during their tests, which could be transformed into reproducible bugs. After the first two days, the first bugs and issues were fixed. The retests for these and other improvements (mainly initiated by the QA teams) were taken into the test sets for the next test iteration. At the Daily Scrum, the agents from the QA teams reported on their team’s progress and took important information to the QA teams as planned.
With each test iteration, the two QA teams' iteration durations diverged from one another. We therefore instructed the faster team to retest old bugs that had been fixed and retested several sprints ago. The idea paid off, as the team actually found some old errors – if only a few – that had evidently been reintroduced after being fixed.
The QA sprint was a major success for the project. On the last day of the sprint, the PO was able to successfully perform the acceptance test and release the product. As a result of this QA sprint, the team managed to considerably boost the quality of the product.
The Scrum team and the QA teams appreciated the nature of the cooperation between the teams. The QA teams benefited from the clear interface because they always knew whom to turn to whenever they needed something. Any questions from the QA teams could be answered quickly and reliably, without a long search for the correct contact person. The members of the Scrum team, in turn, had their workload significantly reduced as a consequence of this interface between the teams. Thus, they could focus on their work and received only relevant, revised information from the QA teams.
However, we underestimated the effort required to create the initial version of good test sets. The same applied to the time needed to filter, evaluate, and revise the bugs before we submitted them to the Scrum team. We still managed because we could count on the QA teams: apart from the usual retests, we knew the Scrum team could waive its own retesting because the QA teams did the job.
That everything worked out as intended is a big credit to our PO, who is very open-minded and has a quality assurance background. She was always open to suggestions and comments. In other projects, and with another PO, this concept would certainly have needed more negotiation with the PO and other stakeholders. Additionally, we benefited from being able to fall back on capable and motivated colleagues who supported us in the QA teams.
Finally, I can report that quality assurance in that project is now on a stable footing and the product has been successfully established on the market. A large, well-chosen suite of automated tests is now available and runs in continuous integration. The experience of this QA sprint has certainly had an important influence on the project's success story.