What is the fascination with code metrics?
The answers in this thread are kind of odd, as they speak of:
- "the team", as if it were "the one and only beneficiary" of said metrics;
- "the metrics", as if they meant anything in themselves.
1/ Metrics are not for one population, but for three:
- developers: they are concerned with instantaneous static code metrics from static analysis of their code (cyclomatic complexity, comment quality, number of lines, ...)
- project leaders: they are concerned with daily live code metrics coming from unit tests, code coverage, and continuous integration testing
- business sponsors (they are always forgotten, but they are the stakeholders, the ones paying for the development): they are concerned with weekly global code metrics regarding architectural design, security, dependencies, ...
All those metrics can be watched and analyzed by all three populations of course, but each kind is designed to be better used by each specific group.
2/ Metrics, by themselves, represent a snapshot of the code, and that means... nothing!
It is the combination of those metrics, and the combination of those different levels of analysis, that may indicate "good" or "bad" code; but more importantly, it is the trend of those metrics that is significant.
It is the repetition of those measurements that gives the real added value, as it helps business managers/project leaders/developers prioritize among the different possible code fixes.
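As a minimal illustration of why the trend matters more than the snapshot (the numbers and function histories here are hypothetical):

```python
# Hypothetical weekly snapshots of cyclomatic complexity for two functions.
# The absolute values matter less than the direction they move in.
history_a = [42, 42, 42, 42]   # high but steady
history_b = [5, 7, 9, 12]      # "beautiful" today, but creeping upward

def trend(history):
    """Classify a metric series as 'rising', 'falling', or 'steady'."""
    first, last = history[0], history[-1]
    if last > first:
        return "rising"
    if last < first:
        return "falling"
    return "steady"

print(trend(history_a))  # steady -> probably fine to leave alone
print(trend(history_b))  # rising -> worth a closer look
```

A single snapshot would flag `history_a` and ignore `history_b`; the trend suggests the opposite priority.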
In other words, your question about the "fascination of metrics" could refer to the difference between:
- "beautiful" code (although that is always in the eye of the beholder-coder)
- "good" code (which works, and can prove it works)
So, for instance, a function with a cyclomatic complexity of 9 could be deemed "beautiful", as opposed to one long convoluted function with a cyclomatic complexity of 42.
BUT, if:
- the latter function has a steady complexity, combined with a code coverage of 95%,
- whereas the former has an increasing complexity, combined with a coverage of... 0%,
one could argue:
- that the latter represents "good" code (it works, it is stable, and if it needs to change, one can check that it still works after modification),
- that the former is "bad" code (it still needs cases and conditions added to cover everything it has to do, and there is no easy way to run regression tests against it).
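Cyclomatic complexity itself is cheap to approximate: it is 1 plus the number of decision points in a piece of code. Here is a rough sketch using Python's stdlib `ast` module (a simplified take on McCabe's metric, not a full implementation; real tools also weigh constructs like `with`, comprehensions, and ternaries):

```python
import ast

# Node types that each add one independent path through the code.
_BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe estimate: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _BRANCH_NODES)
                   for node in ast.walk(tree))

simple = (
    "def f(x):\n"
    "    if x > 0:\n"
    "        return x\n"
    "    return -x\n"
)
print(cyclomatic_complexity(simple))  # 2: one `if`, plus one
```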
So, to summarize:
"a single metric that by itself always indicates [...]"

Not much, except that the code may be more "beautiful", which in itself does not mean a lot...

"Is there some magical insight to be gained from code metrics that I've overlooked?"

Only the combination and trend of metrics give the real "magical insight" you are after.
I had a project that I did as a one-person job measured for cyclomatic complexity some months ago. That was my first exposure to this kind of metric.
The first report I got was shocking. Almost all of my functions failed the test, even the (imho) very simple ones. For half of them, I got around the complexity issue by moving logical sub-tasks into subroutines, even if they were called only once.
For the other half of the routines, my pride as a programmer kicked in and I tried to rewrite them so that they do the same thing, just simpler and more readably. That worked, and I was able to get most of them under the customer's cyclomatic complexity threshold.
In the end I was almost always able to come up with a better solution and much cleaner code. Performance did not suffer from this (trust me, I'm paranoid about this, and I check the disassembly of the compiler output quite often).
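The "moving logical sub-tasks into subroutines" step might look like this (hypothetical order-validation code, sketched for illustration; the point is that per-function complexity drops even though the total logic is unchanged):

```python
# Before: one function carries every check, so every `if` adds to
# its own cyclomatic complexity (here: 3 ifs -> complexity 4).
def process_order_before(order):
    if order is None:
        raise ValueError("no order")
    if not order.get("items"):
        raise ValueError("empty order")
    if order.get("total", 0) <= 0:
        raise ValueError("bad total")
    return {"status": "ok", "total": order["total"]}

# After: the validation sub-task lives in its own routine, even though
# it is called only once. The top-level function's complexity drops to 1,
# which is what a per-function threshold actually measures.
def _validate(order):
    if order is None:
        raise ValueError("no order")
    if not order.get("items"):
        raise ValueError("empty order")
    if order.get("total", 0) <= 0:
        raise ValueError("bad total")

def process_order(order):
    _validate(order)
    return {"status": "ok", "total": order["total"]}
```

Extraction alone only relocates complexity; the second half of the anecdote (rewriting for genuine simplicity) is where the code actually gets better.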
I think metrics are a good thing if you use them as a reason/motivation to improve your code. It's important to know when to stop and ask for an exemption from a metric violation, though.
Metrics are guides and aids, not ends in themselves.