Before we can deliver a software product, we need to measure its quality to ensure it is as bug-free as we can make it. To do this, we first need to know which software quality metrics we are measuring against.

What are the metrics for measuring software quality?

The metrics for measuring software quality can be extremely technical, but they can be boiled down to four essential categories:

Code Quality

Bug-free, semantically correct code is essential for a premium software product. Code quality standards can be divided into quantitative and qualitative metrics. Quantitative quality metrics measure how big or complex the software program is, the number of lines and functions it contains, how many bugs there are per 1,000 lines of code, and more. Qualitative code quality metrics measure features like maintainability, readability, clarity, efficiency, and documentation: how easy the code is to read and understand, and whether it is written according to coding standards.
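As an illustration, one of the quantitative metrics mentioned above, bugs per 1,000 lines of code (often called defect density), can be computed directly. The sketch below is purely illustrative; the function name and the figures in the example are invented.

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Quantitative code quality metric: defects per 1,000 lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects / (lines_of_code / 1000)

# A hypothetical codebase of 48,000 lines with 120 known defects:
print(defect_density(120, 48_000))  # 2.5 defects per KLOC
```

Tracking this number over successive builds shows whether quality is improving or degrading as the codebase grows.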

Performance

Every software program is built for a purpose. Performance metrics measure whether the product fulfils that purpose and performs the way it is meant to. Performance also covers how the application uses resources, its scalability, response times, and customer satisfaction.

Security

Software security metrics measure the inherent safety of a software program, and ensure there are no unauthorised changes in the product when it is handed over to the client.

Usability

Since all software products are built for an end-user, an important quality metric is whether the program is practical and user-friendly. We also ensure that the client is happy with the features and performance.

When do we measure software quality?

Our software development team and Quality Assurance (QA) team work together to ensure that the software quality is of the highest standard. The QA team tests the product once it has been developed, while the development team maintains, measures, and constantly improves software quality during the build. Although we maintain software quality across every stage of development, we may test the product at different points depending on the development methodology used. We use two methodologies when developing software applications – Waterfall and Agile. Since the two methodologies deliver the product in different ways, they are tested differently as well.

Measuring software quality: Waterfall Methodology

In the Waterfall methodology, we plan, execute, test, and deliver in distinct phases, with each phase completed before the next one begins. As a result, with a product developed using this methodology, we need to maintain the quality of the product at every stage – requirements, design, implementation, verification (or testing), and maintenance. Since testing is done at the end of the build, it takes less time and does not require much regression testing.

Measuring software quality: Agile

Agile methodologies are more responsive and flexible: development is broken up into short phases, or sprints. The goal is that at the end of each sprint, which can be two to six weeks long, we deliver a high-quality minimum viable product that is fully functional and tested. This means we have to maintain software quality at each step, in each sprint. Products developed using Agile methodologies are tested more often, but they also need constant regression testing to ensure that an update hasn’t broken functionality that was tested and passed in earlier builds.

How do developers maintain the software code quality?

A good developer is one who can deliver high-quality software code with minimal bugs. We say ‘minimal’ because some bugs are inevitable during development; what matters is how we fix or control them. That is why developers measure their code quality as they develop, so they can identify and fix problems during the build. They measure their code against coding standards, conduct code reviews, run code analysers, and refactor legacy code.

At this stage, software quality is tested manually with short unit tests. A unit test is the first stage of software quality measurement, where the smallest testable part of the software – a module or component of the program or even a single function within the code – is checked.

For example, there might be a number of data fields that need completing as part of a larger software program. A unit test might test just the first field and not the others, or indeed any other part of the program.
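A single-field unit test of this kind can be sketched in Python with the standard unittest module. The form field and its validator below are hypothetical; only this one field is exercised, and the rest of the program is untouched.

```python
import unittest

def validate_age_field(value: str) -> bool:
    """Hypothetical validator for a single form field: accepts integer ages 0-130."""
    if not value.isdigit():
        return False
    return 0 <= int(value) <= 130

class TestAgeField(unittest.TestCase):
    """Unit test for one field only; no other part of the program is tested."""

    def test_accepts_valid_age(self):
        self.assertTrue(validate_age_field("42"))

    def test_rejects_non_numeric_input(self):
        self.assertFalse(validate_age_field("forty-two"))

    def test_rejects_out_of_range_age(self):
        self.assertFalse(validate_age_field("200"))

if __name__ == "__main__":
    unittest.main()
```

Because the test is small and isolated, it pinpoints exactly which component failed when it breaks.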

The developers create a shared library of hundreds of such tests, with repeatable functionality embedded in the software, so the tests can be reused across projects to efficiently detect errors in the code at the development stage. They also conduct automated testing using a code analyser, SonarQube, which checks software for:

  • Clarity
  • Maintainability
  • Documentation
  • Extendibility
  • Efficiency
  • Test coverage
  • Secure coding
  • Refactoring opportunities

It helps us:

  • Conduct code reviews
  • Maintain coding standards
  • Identify bugs and the number of potential bugs in the software

We also use it to assess:

  • The size and structural complexity of the program (lines of code, cyclomatic complexity)
  • Any vulnerabilities found in repositories
  • Code smells (code that is confusing or difficult to maintain)
  • Code coverage (measure of code covered by unit tests)
  • Code duplication (amount of code that is repeated)

How does the QA team measure software code quality?

QA testers review all the software quality metrics through manual and automated testing (using tools such as Selenium), including the validity and standard of the product code. Manual test metrics can be divided into two classes – base metrics and calculated metrics. Base metrics are the raw, unanalysed data that is collected, while calculated metrics are derived from the information collected in the base metrics.

Manual test metrics

Some of the important manual test metrics that we consider for software quality are:

  • Test case execution productivity metrics
  • Test case preparation productivity metrics
  • Test duration
  • Unit test coverage (the amount of software code that is covered by unit tests)
  • Pass/fail percentage of tests, etc.
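The distinction between base and calculated metrics described above can be illustrated with the pass/fail percentage: two base metrics (tests executed and tests passed) yield one calculated metric. A minimal sketch, with invented figures:

```python
def pass_percentage(executed: int, passed: int) -> float:
    """Calculated metric derived from two base metrics:
    total test cases executed and total test cases passed."""
    if executed <= 0:
        raise ValueError("executed must be positive")
    return 100.0 * passed / executed

# Base metrics from a hypothetical test cycle: 250 executed, 230 passed.
print(pass_percentage(250, 230))  # 92.0
```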

Automation test metrics

Automation testing can help reduce the amount of manual time spent in testing software quality. Here are some of the important metrics for automation testing that we consider:

  • Total test duration
  • Unit test coverage
  • Path coverage (how many linearly independent paths of the program the test covers)
  • Requirements coverage
  • Pass/fail percentage of tests
  • Number of defects
  • Percentage of automated test coverage (against the total test coverage which includes manual testing)
  • Test execution (total tests executed during the build)
  • Useful vs. irrelevant results
  • Defects in production
  • Percentage of broken builds, etc.
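Path coverage, listed above, is tied to the number of linearly independent paths through a program, which McCabe's cyclomatic complexity estimates as V(G) = E - N + 2 for a single connected control-flow graph. A minimal sketch of that calculation, with an invented example graph:

```python
def cyclomatic_complexity(edges: int, nodes: int) -> int:
    """McCabe's cyclomatic complexity V(G) = E - N + 2 for one connected
    control-flow graph; it gives the number of linearly independent paths
    a test suite must cover for full path coverage."""
    return edges - nodes + 2

# A hypothetical function whose control-flow graph has 9 edges and 7 nodes
# needs at least 4 independent test paths:
print(cyclomatic_complexity(9, 7))  # 4
```

In practice, automation suites compare the paths actually exercised against this number to report path coverage.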

Other types of tests for measuring software quality

We also use various types of tests to measure software quality. These are:

  • Functional Testing
  • Test to Break
  • Load Performance Testing
  • Regression Testing
  • Security Testing
  • Penetration Testing
  • User Acceptance Testing