Programming Tools: Code Complexity Metrics
Recently, I was asked to maintain some old code and test some new code. Both tasks required that I understand the code with which I was working. Most of the time, these jobs are non-trivial, because most code is complex. At least four items can make things complex:
the obvious problem of understanding what is written
what the code is supposed to do, both at the macro and micro level
the environment in which the code is to run
the assumptions made about each of these things
A lot of tools are available to help sort out each of these issues. Few programs, however, try to measure the complexity of the code itself. I define the complexity of code as the amount of effort needed to understand and modify the code correctly. As I explain in this article, computing complexity metrics often is a highly personal task. Also, few metrics have been shown to be of real value in determining the amount of effort needed to maintain or test code.
Performance metrics measure how well a valid program executes. Profiling tools fall into this category, and many such tools are available. For maintenance metrics, however, there are surprisingly few tools. Therefore, this column concerns creating a maintenance metric tool that measures complexity. It can serve as a prototype for similar tools in other languages.
Maintenance metrics also are called static metrics because they are based on the source code alone. I subdivide maintenance metrics into formatting metrics and logical metrics. Formatting metrics deal with such things as indentation conventions, comment forms, whitespace usage, naming conventions and so on. Logical metrics deal with such things as the number of paths through a program, the depth of conditional statements and blocks, the level of parenthesization in expressions, the number of terms and factors in expressions, the number of parameters and arguments used and the like.
Complexity metrics depend on both of these types of maintenance metrics. For example, poor naming conventions can make any program hard to understand, and poor logical constructs can add to the difficulty of dealing with the code.
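To make the distinction concrete, here is a minimal sketch of my own (a toy example, not part of PyMetrics) that uses Python's standard tokenize module to count one formatting item (comments) and one logical item (decision keywords) in a snippet of source:

```python
import io
import tokenize

SOURCE = '''
def classify(n):
    # negative numbers are rejected early
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"
'''

comment_count = 0                    # a simple formatting metric
decision_count = 0                   # a simple logical metric
DECISION_KEYWORDS = {"if", "elif", "while", "for"}

for tok in tokenize.generate_tokens(io.StringIO(SOURCE).readline):
    if tok.type == tokenize.COMMENT:
        comment_count += 1
    elif tok.type == tokenize.NAME and tok.string in DECISION_KEYWORDS:
        decision_count += 1

print("comments:", comment_count)    # -> comments: 1
print("decisions:", decision_count)  # -> decisions: 2
```

A real tool tracks far more than two counters, but the split between measuring how code looks and measuring how it branches is the same.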
Another thing to notice is how many of the factors involved in measuring complexity are subjective. For example, you may find something easy to understand, while I may find it difficult. Given this, it would be presumptuous of me to tell you how complex your code is. To solve this problem, I remembered how relational databases handle reporting.
If there is some way of finding the elements of a program, putting them into an explicit context and then writing them to a row in a table, we then could use SQL-like facilities to analyze the data in any desired way. That said, we can cheat a little: while writing the records, standard metrics can be computed that may handle most needs.
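As a sketch of that idea, assuming a hypothetical table layout of one row per function (not the actual PyMetrics schema), Python's standard sqlite3 module is enough to run SQL over the collected numbers:

```python
import sqlite3

# Hypothetical rows, one per function, with an invented column layout.
rows = [
    ("parse.py",  "read_header", 3),
    ("parse.py",  "read_body",   12),
    ("report.py", "render",      7),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (file TEXT, function TEXT, mccabe INTEGER)")
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?)", rows)

# Once the data is tabular, any SQL-style report becomes possible,
# for example listing functions above a chosen complexity threshold.
query = "SELECT file, function, mccabe FROM metrics WHERE mccabe > 10"
for file, function, mccabe in conn.execute(query):
    print(f"{file}:{function} has a cyclomatic complexity of {mccabe}")
```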
The McCabe Cyclomatic Metric was introduced by Thomas McCabe in 1976. It probably is the most useful logical metric. It measures the number of linearly independent paths through a program. For example, for a simple function that has no conditionals, only one path exists. This straight-line code usually is easy to follow. Programs that have many conditionals, in turn, are harder to follow. The difficulty also increases if multiple ways of exiting the program exist. That is why it often is a headache to debug a program with many exits.
The McCabe metric is:
M = E - N + X
where M is the McCabe Cyclomatic Complexity (MCC) metric, E is the number of edges in the graph of the program, N is the number of nodes or decision points in the graph of the program and X is the number of exits from the program.
In programming terms, edges are the paths of code executed as a result of a decision, and nodes are the decision points themselves. Exits are the explicit return statements in a program. Normally, there is one explicit return for functions and no explicit return for subroutines.
A simpler method of computing the MCC is demonstrated in the equation below. If D is the number of decision points in the program, then
M = D + 1
Here, decision points can be conditional statements. Each decision point normally has two possible paths.
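For example, counting the conditional statements in the following toy function (my own illustration, not taken from PyMetrics) and adding one gives its cyclomatic complexity:

```python
def describe(n):
    """Toy function used only to count decision points."""
    if n < 0:            # decision 1
        return "negative"
    if n == 0:           # decision 2
        return "zero"
    if n > 1000:         # decision 3
        return "large"
    return "small positive"

# Three decision points (D = 3), so M = D + 1 = 4, matching the four
# distinct paths: negative, zero, large and small positive.
print(describe(42))      # -> small positive
```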
MCC also is useful in determining the testability of a program. Often, the higher the value, the more difficult and risky the program is to test and maintain.
Some standard values of Cyclomatic Complexity are shown in Table 1:
Table 1. Standard Values of Cyclomatic Complexity
Cyclomatic Complexity | Risk Evaluation
---|---
1-10 | a simple program, without much risk
11-20 | more complex, moderate risk
21-50 | complex, high risk
51+ | untestable, very high risk
One final word on MCC that also applies to most of the other metrics: each element in the formula is assumed to have the same weight. In MCC's case, both branches of a decision are assumed to be equally complex. In practice, however, this often is not true. Think of an if statement with code for only one branch: each branch still is treated as having the same weight. Likewise, all expressions are measured the same, even those that contain many factors and terms. Be aware of this and be prepared, if your tool gives you the ability, to assign different weights to different branches. The resulting metric is called an Extended McCabe Complexity Metric.
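The weighting scheme is up to you. As a rough sketch, one possible interpretation (the weights below are invented for illustration, not a standard definition) simply replaces the count of decisions with a sum of per-decision weights:

```python
# Invented weights: each decision is scored by how hard its branches are
# to follow, instead of counting every decision as exactly 1.
decision_weights = [
    1.0,   # if with a trivial one-line branch
    1.5,   # if/else where both branches do real work
    3.0,   # loop guarding a nest of further conditionals
]

extended_score = 1 + sum(decision_weights)     # analogous to M = D + 1
print("weighted complexity:", extended_score)  # -> 6.5
```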
A number of commercial tools are out there, as well as some open-source ones. None seem to give you the ability to define your own metrics, however, or to use extended versions of existing metrics. Also, of the open-source tools I found, most were outdated or no longer maintained. The one exception is the Eclipse Metrics Plugin.
For the rest of us, I decided to write an open-source program to produce metrics that end users can compute and modify. The program is written in Python and currently is limited to analyzing Python--thus the name PyMetrics--but the principles can be extended to any language. By writing the code in Python, you should be able to understand the program better than if I had written it in almost any other language. The major advantage of this approach is the output format. It lets you work with the raw numbers to produce your own reports and metrics.
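For instance, assuming a hypothetical comma-separated export with one row per function (the real PyMetrics output may use different fields and names), a few lines of Python are enough to build a custom report from the raw numbers:

```python
import csv
import io

# Hypothetical export; the actual PyMetrics columns may differ.
raw = io.StringIO("""\
file,function,mccabe,lines
parse.py,read_header,3,25
parse.py,read_body,12,140
report.py,render,7,60
""")

# A custom "report": the function with the highest cyclomatic complexity.
worst = max(csv.DictReader(raw), key=lambda row: int(row["mccabe"]))
print(f"highest MCC: {worst['function']} ({worst['mccabe']})")
```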
The PyMetrics project is hosted on SourceForge.net, and the files can be downloaded from the project page. Please note that due to the short time I had to produce the program, it is not as polished as I would have liked. The things I would have liked to do include but are not limited to:
Showing what are considered industry standard metric values as a point of comparison for your metrics.
Simplifying some of the modules, including tokenize and the main module, PyMetrics.
Providing better modularization to allow others to extend this program more easily.
Defining a set of test suites.
As always, offering better documentation.
Metrics are numbers representing the complexity of a given program. Although some accepted standard measures exist, you need to decide what works for you and what thresholds are acceptable.