develooper Front page | perl.module-authors | Postings from June 2008

Fwd: New CPANTS metrics

Eric Roode
June 10, 2008 08:03
Fwd: New CPANTS metrics
Apologies to the list; I mistakenly sent this rant to Gabor Szabo
only, when I meant to send it to the whole list for discussion.

-- Eric

---------- Forwarded message ----------
From: Eric Roode <>
Date: Tue, Jun 10, 2008 at 11:01 AM
Subject: Re: New CPANTS metrics
To: Gabor Szabo <>


   I do not wish to rain on anyone's parade, and I do not wish to
quash anyone's enthusiasm for improving Perl or its wide-ranging
library of modules.  However:

   I cannot for the life of me figure out what sort of value there is
to CPANTS or the Kwalitee metric.  It is a garbage bin of random
metrics that people dream up, regardless of whether they are useful to
the real-world end-user or to the module author.  Metrics are good
only when they measure something that is worth improving and that
authors can reasonably improve.  Otherwise, it's like
measuring programmer productivity by counting lines of code, or by
counting the number of bugs fixed.  Yes, there is *some* correlation
to quality, but only a vague one.

   Looking over the current list of tests, I see:

* Valid metrics, most of which probably have _some_ usefulness:

* Then there are these, which are probably valid metrics, but of
questionable utility:

   buildtool_not_executable:  Nearly everyone does "perl Makefile.PL"
or "perl Build.PL".  Note that this does not specify a particular perl,
just the first one in the user's PATH.  I'll bet that 95% of the rest,
who presumably do "./Makefile.PL" or "./Build.PL", have only one perl
installed anyhow. I agree with the reason for this test, but I
question whether it's really doing anyone any good.

   extracts_nicely: How much of a problem is this, really?

   has_buildtool: How many CPAN modules does this even apply to?
   has_working_buildtool: ditto

   metayml_conforms_to_known_spec: valid metric; how useful is this, though?
   metayml_is_parsable: Isn't this redundant with "conforms to known spec"?
   metayml_conforms_spec_current: Yeah, whatever.
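For reference, the file these three metayml_* checks parse is small.  A
minimal sketch of a META.yml that would satisfy the v1.4 spec looks
roughly like this (the distribution name, abstract, and author are
invented for illustration):

```yaml
# Hypothetical META.yml; field names follow the META.yml v1.4 spec.
name:     Acme-Example
version:  0.01
abstract: A module that exists only to illustrate META.yml
author:
  - A. U. Thor <author@example.com>
license:  perl
meta-spec:
  version: 1.4
  url:     http://module-build.sourceforge.net/META-spec-v1.4.html
```

Most authors never write this by hand; ExtUtils::MakeMaker or
Module::Build generates it, which is part of why a "conforms to spec"
ding usually points at the toolchain rather than the author.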

* Then there are the following tests, which as far as I can tell are useless:

   extractable: How many CPAN modules have EVER been released with a
weird packaging format?  Is this addressing a real problem in the real
world?

   has_humanreadable_license: I suppose there are some developers in
big companies somewhere who have to pore through distributions to find
licenses to see whether they're allowed to use the module... but is
this really a big problem?

   has_readme: ditto

* Finally, we come to the large body of misguided and unuseful tests:

   has_test_pod: Useless.  Why should every end-user have to test the
correctness of the module's POD?  The original developer should test
it, and the kwalitee metric should test that the module's POD is
correct (no_pod_errors).  Including it as part of the module's test
suite is useless.
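For context, what has_test_pod apparently looks for is a t/pod.t file
in the distribution.  A sketch of the conventional boilerplate follows
(adapted from the Test::Pod documentation; in a real distribution the
pass() line would be a call to Test::Pod::all_pod_files_ok(), which
checks every POD-bearing file in the distribution):

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Conventional t/pod.t shape: skip, rather than fail, on machines
# where Test::Pod is not installed.
SKIP: {
    skip 'Test::Pod 1.00 required for testing POD', 1
        unless eval { require Test::Pod; Test::Pod->VERSION('1.00'); 1 };
    # In a real distribution:  Test::Pod::all_pod_files_ok();
    pass 'Test::Pod available; POD files would be checked here';
}
```

Note that this runs the POD check on every end-user's machine at
install time, which is exactly the duplication of effort I am
objecting to.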

   no_cpants_errors: Module author should not be dinged for bugs in
CPANTS testing.

   proper_libs: Misguided.  Why should end-users care about the
structure of the module build tree?  They shouldn't.

   use_strict: Misguided.  "use strict" is a valuable tool for
developers, but it is not necessary (or even desirable) in all
released code.

   use_warnings: Misguided.  Maybe my module doesn't need warnings.
Maybe I tested all relevant cases and got no warnings.  Why should the
end-user's code need to load yet another module (even if small) just
so I can get one more kwalitee point?
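To make the trade-off concrete, here is a small illustration (mine,
not from the thread; the variable names are invented) of what the two
pragmas actually catch:

```perl
use strict;
use warnings;

my $count = 10;
$count = $count + 1;
print "count is $count\n";    # prints "count is 11"

# Under "use strict", the typo below would be a compile-time error
# ("Global symbol \"$cuont\" requires explicit package name") instead
# of silently creating a fresh, undefined package variable:
#     $cuont = $count + 1;

# "use warnings", by contrast, is a runtime aid: without it, using an
# undefined value in arithmetic or a string passes silently instead
# of emitting an "uninitialized value" warning.
```

Both are developer-time safety nets; neither changes what a correct,
already-debugged module does for the end-user, which is the point.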

   has_example: Possibly useful, but poorly implemented (or possibly
poorly documented).  Most modules that include examples do so in an
"Examples" section of the POD, not in a separate file or directory.
The has_example documentation implies that it'll only be satisfied by
a separate file or directory.

   has_tests_in_t_dir: Misguided.  Why should end-users care about
the location of the tests, so long as they all get run?  They
shouldn't.

   is_prereq: Awful.  I write many of my modules without depending
on prerequisites; this reduces the load on end-users.  I expect that
many other module authors do the same.  Should I include other
authors' modules just to improve their kwalitee scores?  More
importantly, why should Acme::FromVenus get a point for this test,
just because the author put up a dummy distro, Acme::FromMars, which
uses it?   Some of my modules are very useful for end-users' code, not
so much for module developers' code.  So I get dinged for this?

Now let me address Szabo's two questions:

On Mon, Jun 9, 2008 at 4:32 PM, Gabor Szabo <> wrote:
> There are two issues regarding the criticism:
> 1) I did not find any mention of any new metric that would be good.
>    I'd be really glad to hear ideas of what could be a good metric?

Here's my suggestion:  Don't add metrics for the sake of adding
metrics.  "More metrics" does not mean "better metrics".  Rip out
two-thirds of the existing kwalitee metrics and discard them.  Stick
to reporting things that are going to actually improve something.

> 2) True, it would be great if more of the module authors knew about
>     CPANTS and cared. I agree so how could we let them know about
>     it besides posting on and on this mailing list?
>     Maybe ? Any other ideas?

I know about CPANTS and kwalitee.  I understand what they are trying
to do.  I cannot see at all why I (or anyone) should bother
participating, as a module author or as a user of modules.

My two cents,
-- Eric
