A topic of interest to me is the different ways it is possible to view software quality, so I was delighted to find a new way to categorize software quality while reading Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce. In it, they propose viewing a software codebase in terms of its internal and external quality. Fascinated, I decided to do a bit of research into the approach to see what I could learn.
The earliest example I have found of the internal/external categorization is in Steve McConnell's Code Complete, where he describes the characteristics of internal and external quality:
External Quality Characteristics: Correctness, Usability, Efficiency, Reliability, Integrity, Adaptability, Accuracy, and Robustness.
Internal Quality Characteristics: Maintainability, Flexibility, Portability, Re-usability, Readability, Testability, and Understandability.
These characteristics are reasonable enough, but as Steve McConnell goes on to say, the "difference between internal and external characteristics isn't completely clear-cut because at some level internal characteristics affect external ones".
For me though, the breakdown of software quality into Internal and External Quality is pretty simple:
Internal Quality determines your ability to move forward on a project
External Quality determines the fulfillment of stakeholder requirements
Software with high internal quality is easy to change, easy to extend with new features, and easy to test. Software with low internal quality is hard to understand, difficult to change, and troublesome to extend. Measures like McCabe's Cyclomatic Complexity, Cohesion, Coupling, and Function Points can all be used to understand internal quality.
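To make one of these measures concrete, here is a simplified sketch of how McCabe's cyclomatic complexity can be counted: one, plus the number of decision points in the code. This uses only Python's standard `ast` module and covers a subset of the node types a real tool (such as radon) would handle, so treat it as an illustration rather than a production metric.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe complexity: 1 + number of decision points.

    A simplified sketch -- counts branches, loops, exception handlers,
    and boolean operators; real tools handle more node types.
    """
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While,
                          ast.ExceptHandler, ast.And, ast.Or))
        for node in ast.walk(tree)
    )
    return 1 + decisions

straight_line = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        return 1\n"
    "    elif x < 0:\n"
    "        return -1\n"
    "    return 0\n"
)

print(cyclomatic_complexity(straight_line))  # 1: no decision points
print(cyclomatic_complexity(branchy))        # 3: if + elif
```

A function whose complexity creeps into the double digits is a good candidate for the "hard to understand, difficult to change" category above.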
External software quality is a measure of how the system as a whole meets the requirements of stakeholders. Does the system provide the functionality required? Is the interface clear and consistent? Does the software provide the expected business value?
Why have the distinction? It is clear that a software project needs both External and Internal quality in order to succeed, but what value does the categorization provide?
- The business case for unit/integration/system tests is much clearer
- The motivation for a particular type of test is easier to understand
- Understanding the motivation for a particular test avoids mixing test abstraction levels
Feedback from tests
In their book, Growing Object-Oriented Software, Guided by Tests, Steve Freeman and Nat Pryce include a wonderful illustration relating the level of internal and external quality feedback that different types of test can give you.
What is interesting about this illustration is that different testing approaches yield quality feedback of different kinds. It shows that end-to-end system tests provide the most feedback on the external quality of the system, while unit tests provide the most feedback on internal quality.
It also underlines the importance of multiple testing approaches. If you want to make a system that both meets the stakeholder requirements and is easy to understand and change (who doesn’t?) then it makes sense to develop with both unit and system level tests.
How to Automate Internal and External Quality Feedback
There are many well-established methods for monitoring and ensuring internal software quality. In fact, this has been a major part of the agile software movement. Practices like unit testing, TDD, code reviews, pair programming, static code analysis, and code coverage can all form part of a good process for ensuring internal quality.
The picture isn’t quite as clear-cut for external quality. The most popular techniques are continuous integration, end-to-end system tests, and iteration demos with stakeholder feedback. In addition, there are two relatively new techniques that can be employed: Behavior Driven Development and Mock Objects.
A full explanation of BDD is outside the scope of this article, but the gist of it is creating executable specifications, usually in a domain-specific language, driven by business value. These specifications are created in concert with stakeholders to support automation of executable requirements.
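To give a flavor of what an executable specification looks like, here is a minimal sketch in plain Python, using a hypothetical shopping-cart domain. Real BDD tools such as Cucumber or behave would express the Given/When/Then steps in a business-readable language and bind them to code; the comments below stand in for that layer.

```python
# Hypothetical domain object for the sake of the example.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def spec_adding_items_increases_the_total():
    # Given an empty cart
    cart = Cart()
    # When the customer adds a book priced 10 and a pen priced 2
    cart.add("book", 10)
    cart.add("pen", 2)
    # Then the total is 12
    assert cart.total() == 12

spec_adding_items_increases_the_total()
print("specification passed")
```

The value is less in the code itself than in the Given/When/Then structure: it reads as a stakeholder requirement first and a test second, which is what ties it to external quality.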
A New Approach to Test Coverage
What this shows is that code coverage alone is not a good enough metric for ensuring internal and external quality. A part of the system might have 100% coverage from integration tests but be excruciatingly difficult to change. Alternatively, a part of the system might have great unit tests but a terrible interface, or worse, provide no business value at all.
I have found that the division of quality into external and internal is a useful tool for appreciating the value of different testing types, and the strengths and weaknesses of each approach in different contexts. This is why I am now leaning towards measuring code coverage separately in two execution environments: one for the integration test suite and one for the unit test suite. This provides a much richer picture of the internal and external quality of your system.
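With coverage.py and pytest, one way to keep the two coverage measurements separate is to point each suite at its own data file via the COVERAGE_FILE environment variable. The test directory layout here (tests/unit and tests/integration) is an assumption; adapt it to your project.

```shell
# Run each suite into its own coverage data file
COVERAGE_FILE=.coverage.unit coverage run -m pytest tests/unit
COVERAGE_FILE=.coverage.integration coverage run -m pytest tests/integration

# Report the two suites separately
COVERAGE_FILE=.coverage.unit coverage report
COVERAGE_FILE=.coverage.integration coverage report
```

Comparing the two reports highlights exactly the gaps described above: code exercised only end-to-end (hard to change safely) and code exercised only in isolation (possibly disconnected from stakeholder value).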
- Growing Object-Oriented Software, Guided by Tests, Steve Freeman and Nat Pryce. ISBN: 0321503627.
- According to Wikipedia the definition comes from p. 558 of the first edition of Code Complete, but I found it on page 463 of the second edition.
- Comparing internal and external software quality measurements: page on the Portland Pattern Repository Wiki.