Build In Quality For Successful Agile Development

Operating in dynamic markets and facing high end-user expectations, organisations are increasingly adopting Agile methods of application development that enable delivery teams to flex with constantly changing requirements. Done properly, the Agile model is attractive because it enables businesses to promote features to production rapidly throughout the development cycle, delivering value to users early and often.

Compared to traditional application development methods, including the Waterfall model that can take months or years to deliver to production, Agile typically delivers visible results in weeks. This is because change is delivered to users in small chunks at regular intervals that, crucially, allow time to gather user feedback for refinement.

The problem

In a Waterfall project, it can be many months before the code is properly tested, and issues are all too often uncovered late, resulting in hasty decisions and costly rework. A Waterfall approach forces senior management and the business to remain in the dark too long, relying on their IT team to communicate exactly what will be delivered via increasingly fictional status reports. This leads to poorly managed expectations and a misunderstanding of key process requirements.

The reality of these projects is that delivery is often delayed, critical requirements are frequently missed, and issues are found late in the testing process. By the time the project status turns “red”, it is too late to retrofit quality, manage communication to key user communities, and prevent cost overruns and schedule slippage.

The longer it takes to deliver real code into production and value to users, the more difficult it is to manage expectations. It becomes harder to secure engagement and valuable feedback in the requirements process, so the cost of rectifying issues rises, more time is lost and more rework is required.

The difference with Agile is that the feedback loop is dramatically shorter and functionality is delivered more regularly. This means that issues are uncovered more quickly – usually within a few iterations – and quality checking is built into the development and build process from the ground up. This quick feedback works to keep the overall cost of Agile projects down.

However, Agile projects require the right investments in time, cost and effort to create the best framework for delivery success. Part of this investment should go into the initial approach and framework for quality and testing. Also, a point often overlooked in the drive to produce the first features is that a significant proportion of the Agile team needs to be testing experts.

There are two main streams of effort associated with building and maintaining quality. The first is to ensure that new features in each iteration function as required and the second is to ensure that existing features continue to function as expected. To facilitate this, a significant effort during each iteration must be allocated to automating the tests for new features. Also, maintaining an automated regression test pack will ensure that the software continues to work as expected.
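As a minimal sketch of what automating the tests for a new feature might look like, consider a hypothetical `calculate_discount` feature (the function and its checks are illustrative, not from any real project). The tests are written in the same iteration as the feature and then join the regression pack, where they continue to guard existing behaviour in later iterations:

```python
# Hypothetical feature delivered in this iteration: a simple discount calculator.
def calculate_discount(order_total: float, customer_tier: str) -> float:
    """Return the discounted total for an order, by customer tier."""
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    return round(order_total * (1 - rates.get(customer_tier, 0.0)), 2)

# Automated checks written alongside the feature in the same iteration.
# Once added to the regression pack, they run on every subsequent build,
# verifying that this feature continues to work as expected.
def test_gold_customer_gets_ten_percent_off():
    assert calculate_discount(100.0, "gold") == 90.0

def test_unknown_tier_pays_full_price():
    assert calculate_discount(100.0, "platinum") == 100.0
```

The point of the sketch is the workflow, not the arithmetic: each iteration's features arrive with their own automated checks, so the regression pack grows alongside the product.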

This automated regression pack should be integrated into the software build process so that the quality of each new drop of code is checked. This will provide fast feedback for the development team, and form part of the acceptance criteria for the software.
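One way this integration might look, sketched with Python's standard `unittest` module (the `RegressionPack` test case is a stand-in for the accumulated tests of earlier iterations): the build runs the whole pack and reports success or failure, which the build server uses to accept or reject each new drop of code.

```python
import unittest

class RegressionPack(unittest.TestCase):
    """Stand-in for the accumulated regression tests of earlier iterations."""

    def test_existing_feature_still_works(self):
        # A trivial placeholder check; real packs cover real features.
        self.assertEqual(sum([1, 2, 3]), 6)

def run_regression_pack() -> bool:
    """Run the full pack; return True only if every test passes."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionPack)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

# In CI, the build script would exit non-zero when run_regression_pack()
# returns False, causing the build server to reject the code drop and
# giving the development team fast feedback on the failure.
```

Any test runner with a machine-readable pass/fail result can fill this role; the essential design choice is that a failing regression test fails the build, rather than being triaged later.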

If an automated regression pack is not built, regression testing will soon become a task that the Agile team cannot accomplish manually. This will lead either to a requirement for more and more testers on the team, or to a relaxation of quality standards, with reduced coverage and frequency of regression test execution. Either outcome prevents the swift feedback required for ultimate project success and lets the development slip back into a Waterfall model, with the time between releases increasing.

These issues will become increasingly problematic unless an investment is made to pay back the technical debt by automating the regression pack, so that feedback on the software’s quality can be provided within each iteration.

The delivery of high quality software also needs to address factors such as performance, resilience and robustness. If these aspects are only checked on the “finished” product, rather than being defined in the acceptance criteria and built into each design iteration, it may prove too late to rectify them easily. Remedying performance bottlenecks then becomes a long and expensive exercise, and may require extra investment in processes, time and hardware to improve availability – while ultimately delivering a system that is never completely fit for purpose.
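A non-functional acceptance criterion can be automated in the same way as a functional one. As a minimal sketch (the response-time budget, `fetch_account_summary` function and its simulated work are all hypothetical), a performance check asserted on every build looks like this:

```python
import time

# Hypothetical non-functional acceptance criterion, agreed before development:
# the account summary must be produced within half a second.
RESPONSE_TIME_BUDGET_S = 0.5

def fetch_account_summary() -> dict:
    """Stand-in for a feature whose performance is part of its acceptance criteria."""
    time.sleep(0.01)  # simulated work
    return {"balance": 100.0}

def test_account_summary_meets_response_time_budget():
    start = time.perf_counter()
    fetch_account_summary()
    elapsed = time.perf_counter() - start
    assert elapsed < RESPONSE_TIME_BUDGET_S, f"too slow: {elapsed:.3f}s"
```

Because the budget is checked in every iteration rather than on the “finished” product, a performance regression surfaces in the build that introduced it, while the cause is still fresh and cheap to fix.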

However software is tested, there is a limit to what can be covered within the finite budget and time available. So it is important to make software quality as transparent as possible and provide visibility of the overall risk, to assist decisions such as whether to delay a release or change the amount of investment in development process quality.

Key tips for success

  • Prior to the development of any software ensure that there is a well-defined approach to quality with a sensible balance between developers and quality assurance (testers)
  • Ensure that there is sufficient budget to build quality into the software from the beginning
  • Implement the tools and frameworks and pilot them to ensure that they will be able to support the quality approach prior to starting the development of features
  • Ensure that features and requirements (both functional and non-functional) have sufficient acceptance criteria prior to the development of the code
  • Continually check the quality of the software, providing early and continual feedback and visibility of its overall quality
  • Invest in an automated regression pack and integrate the tests into the build process (CI) to increase agility and improve confidence to make releases
  • Measure and track different aspects of quality to identify potential improvements in the development process and help make investment decisions.

Mark Firth is Endava’s Head of Testing Services. Before joining Endava, Mark was a founder and director of Testing4Finance which was a niche software testing consultancy that was acquired by Endava in August 2012. Prior to Testing4Finance, Mark spent 10 years working for Cresta/SQS Group where he progressed from test consultancy to Business Unit Director and he was instrumental in developing two of SQS’s largest clients. Before focussing on testing, Mark spent 10 years developing commercial products and applications for internal IT organisations.