Thursday, February 22, 2024

Investing in testability leads to increased productivity


Testability refers to how easy it is to test a system and how feasible the required tests are.

To improve software quality, it is important not to confine quality efforts to the test process itself, but to consider testing from the development stage onward.


In this article, we will introduce the concept of "testability," which is directly linked to the productivity of the test process, along with points to keep in mind when building a highly testable system.

What is testability?

Testability is a measure of how easily and effectively a system can be tested. In other words, it indicates whether tests are easy to run and whether the required tests can be carried out sufficiently.

High testability means that the required tests are easier to run and defects are easier to find.

On the other hand, when testability is low, some tests cannot be executed at all, others can be performed only at enormous cost, and the defect detection rate suffers because testing is limited to a narrow set of methods.

Various indicators of testability

Testability, that is, the ease and feasibility of testing, is not evaluated with a single index but from multiple angles using several indicators. Here, we introduce the testability quality model proposed by James Bach as a representative set of indicators.

Testability quality model


Operability

An index measuring whether tests can be executed smoothly. For example, bugs that interfere with a test are unlikely to occur while it is running.


Observability

An index measuring how easy it is to check the output of the test target and any errors or defects that occur. For example, errors are easy for the tester to detect, and there is a way to obtain the system's internal state from the outside.


Controllability

An index measuring how easy the test target is to operate and control during testing. For example, the internal state of the test target can be manipulated to match the test scenario.


Decomposability

An index measuring how easy it is to isolate the range being tested and run tests on it independently. For example, components that the test target depends on can be replaced with stubs and mocks.
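As a sketch of this idea, the hypothetical Python example below replaces an external dependency with a stub so the logic under test can be exercised in isolation. All class and function names here are illustrative, not a real API.

```python
# Hypothetical example: a conversion function that depends on an external
# exchange-rate service. High decomposability means the dependency can be
# swapped for a stub and the logic tested alone.

class ExchangeRateClient:
    """Production client that would call an external API (needs network access)."""
    def get_rate(self, currency: str) -> float:
        raise NotImplementedError("requires network access")

class StubExchangeRateClient:
    """Stub returning fixed rates so the conversion logic is testable in isolation."""
    def __init__(self, rates: dict):
        self._rates = rates

    def get_rate(self, currency: str) -> float:
        return self._rates[currency]

def convert_to_usd(amount: float, currency: str, client) -> float:
    # The dependency is passed in, so a test can decompose the system
    # and exercise this function without the real service.
    return amount * client.get_rate(currency)

stub = StubExchangeRateClient({"EUR": 1.25})
print(convert_to_usd(100.0, "EUR", stub))  # 125.0
```

Because the collaborator arrives as a parameter, the test scope is cleanly separated from the network boundary.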


Simplicity

An index measuring the simplicity of the code structure and functional specifications under test. For example, there are no unnecessary function calls or redundant code structures.


Stability

An index measuring whether system changes that increase the burden of test design and execution are infrequent. For example, interface specifications rarely change, design policy changes are unlikely, and even when they do occur, the cost of updating the test design is low.


Understandability

An index measuring how easy it is to extract the information needed when designing tests. For example, information such as API specifications can be understood without knowledge of the internal structure.

Testability and reliability

The testability quality model proposed by James Bach above mainly concerns the test implementation stage. In addition, testability can also be examined from the standpoint of reliability.

For example, in the development of systems that demand high quality, such as artificial satellite software, the tests required by a given design approach are rigorously reviewed.

If the review concludes that the chosen design cannot be tested sufficiently, the design may be scrapped and redone.

Relationship between ease of testing and software structure

As the indicators above suggest, testability is closely related to software structure at the architectural level, so it is necessary to design with ease of testing in mind from the development stage.

Let’s take an automated test as an example to explain in a little more detail.


Automated testing is one option for improving test quality, but implementing it presupposes a highly testable system structure.

Poor testability can make automated tests difficult to implement, or force major rework of the automated tests with every system change, driving up the cost of maintaining quality.

Therefore, before building an automated testing environment, review the system under test for a highly testable design at the architectural level.

Development points for improving testability

There are three points to keep in mind when designing to improve testability: "control points," "observation points," and "seams." Below, we introduce each concept.

Control Point

This concept focuses on the operability of the part under test and on implementations that enhance controllability. Code-level examples include controlling input through an API and rewriting the contents of variables with a debugger.
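A minimal sketch of a control point, assuming a hypothetical discount service: its time source is a parameter, so a test can drive the internal state deterministically instead of depending on the real clock. All names here are invented for illustration.

```python
import datetime

class DiscountService:
    def __init__(self, now_fn=datetime.datetime.now):
        # Control point: the time source can be overridden by a test,
        # while production code uses the real clock by default.
        self._now = now_fn

    def is_happy_hour(self) -> bool:
        return 17 <= self._now().hour < 19

# A test fixes the clock to exercise a specific branch:
def fixed_time():
    return datetime.datetime(2024, 2, 22, 18, 0)

service = DiscountService(now_fn=fixed_time)
print(service.is_happy_hour())  # True
```

Without this control point, the happy-hour branch could only be tested by running the suite at a particular time of day.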

Observation Point

This concept focuses on the output of the object under test. Means of capturing the test target's output include exposing it through an API and comparing output against expected values using a test spy.
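The following hand-rolled test spy is one way to realize such an observation point: it records outgoing calls that would otherwise be invisible, so a test can assert on them afterwards. The notifier and stock-alert names are hypothetical.

```python
# A test spy stands in for a real collaborator and records every call
# so the test can observe output that leaves the component.

class NotifierSpy:
    """Records notifications instead of sending them."""
    def __init__(self):
        self.sent = []

    def send(self, recipient: str, message: str) -> None:
        self.sent.append((recipient, message))

def alert_on_low_stock(stock: dict, threshold: int, notifier) -> None:
    # Sends one alert per item whose count falls below the threshold.
    for item, count in stock.items():
        if count < threshold:
            notifier.send("ops-team", f"low stock: {item} ({count})")

spy = NotifierSpy()
alert_on_low_stock({"bolts": 3, "nuts": 50}, threshold=10, notifier=spy)
print(spy.sent)  # [('ops-team', 'low stock: bolts (3)')]
```

The spy turns an invisible side effect (a notification) into data the test can compare against expected values.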

Seam (Object Seam)

This concept focuses on the points where the component under test connects to other components. Isolating the test target improves decomposability and ease of execution. One example is a technique such as "Dependency Injection," which allows variables and classes to be supplied dynamically from outside the component.
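A minimal constructor-injection sketch, with all class names assumed for illustration: the collaborator is supplied from outside, creating a seam where a test can substitute an in-memory double for the production implementation.

```python
class Repository:
    """Production implementation would talk to a database."""
    def find_user(self, user_id: int) -> str:
        raise NotImplementedError("requires a database connection")

class InMemoryRepository:
    """Test double injected through the same seam."""
    def __init__(self, users: dict):
        self._users = users

    def find_user(self, user_id: int) -> str:
        return self._users[user_id]

class GreetingService:
    def __init__(self, repo):
        # Dependency Injection: the repository is given from outside,
        # so the service never constructs its own collaborator.
        self._repo = repo

    def greet(self, user_id: int) -> str:
        return f"Hello, {self._repo.find_user(user_id)}!"

service = GreetingService(InMemoryRepository({1: "Ada"}))
print(service.greet(1))  # Hello, Ada!
```

Because GreetingService never hard-codes which repository it uses, the seam lets tests run the same logic against a fake with no database involved.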

Measurement of the effect of improving testability and continuous improvement

The effect of improving testability can first be measured by how much test-execution productivity has improved, how far automated test coverage has expanded, and so on.

In addition, measures aimed at improving testability benefit not only test engineers but also development engineers. For example, debugging becomes easier and the same tests can be reused repeatedly, reducing development effort, so it is worth measuring these effects as well.

On the other hand, note that measures to improve testability can trade off against performance. For example, outputting logs to improve observability may slow down specific processing, so attention to the overall balance is necessary.

The effectiveness of measures to improve testability may diminish due to changes in the software architecture throughout the development cycle.

Therefore, each time the development cycle repeats, confirm what effect the testability improvements produced, and continuously review, optimize, and improve them.

Evaluate both the impact of testability measures on the system and their expected benefits in cost reduction and quality improvement across the whole development effort.

Conclusion

In this article, I explained the concept of testability, its evaluation indicators, and the relationship between testability and software structure.

Introducing testability as a cornerstone of the entire development process improves not only the overall productivity of software development but also the quality and reliability of your products.

If you would like to read more about similar topics, check out the Facebook page Maga Techs.


