develooper Front page | perl.qa | Postings from July 2008

Re: Making CPAN ratings easy (was Re: CPAN Ratings and the problem of choice)

From: Rick Fisk
Date: July 1, 2008 00:09
Subject: Re: Making CPAN ratings easy (was Re: CPAN Ratings and the problem of choice)
Message ID: 4869D812.2060400@drivebuytech.com
Paul Fenwick wrote:
> [ CC'ed to rethinking-cpan, since I assume it's halfway relevant.  If 
> not, let me know, and I'll get out of your face. ]
>
> Greg Sabino Mullane wrote:
>
>> Why are people not rating modules?
>
> Because rating modules is a monumental pain in the arse.
>
> == The CPAN Ratings Way ==
>
> Let's pretend you're J. Average Hacker.  You've popped over to CPAN 
> because it provides a nice way of reading the documentation on a 
> module you've just started using, let's say Moose.  Here's the webpage 
> you see:
> <snip>
Exactly. So, are there reasonable conclusions one can draw from the 
available information without injecting some sort of artificial judgement?

I write both Java and Perl, and one of my personal prerequisites for 
going to the trouble of downloading jars or CPAN modules is that the 
project is active. I have found on numerous occasions that a promising 
library or module is not actually useful because:

- The authors had a fight, split up, and coded similar projects to 
spite each other.
- The library is now deprecated and replaced by a better project.
- The library sucked and the documentation was worse.
- The library or module was never embraced and has since been abandoned 
regardless of its utility.
- The library was great when introduced, but nobody fixed critical 
bugs, rendering it useless.
- The library didn't keep up with changes in its dependencies, and the 
user community has since moved on.

The ratings may be totally useless for whatever reason, but there are 
some metrics which may prove useful and which do not require judgments 
by the publishers. The following is what I try to discover, if 
possible, before relying on somebody else's code:

Number of downloads -
    Perhaps this could be broken down by release version of the module 
to keep it pertinent.
Number of bugs reported vs. bugs fixed/closed -
    A good number of reported bugs is, in my opinion, an indication of 
a module's success rather than its failure. Coupled with the number of 
downloads, you can judge how useful it apparently is to its community 
of users. Lots of reported bugs, no fixes, and a downward trend in 
downloads might tell the observer to avoid the module.
Number of releases per interval -
    If there is a fairly regular set of releases over time, then I 
would probably judge the module worthy of a download - assuming it is 
also being downloaded by other users. I can probably count on bugs 
getting fixed.
First release/last release date -
    When was it introduced, and how often has it been updated? (As a 
digression: why is Devel::Cover - a module released over five years 
ago, and with several updates since - still labeled 'alpha' by its 
author?)
Number of references to the module returned by search engines other 
than perl.org -
    Regardless of content: if a lot of people are talking about it, 
it's probably being used a lot.
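As a rough sketch of how these metrics might combine, here is a toy scoring function in Perl. Everything about it is hypothetical: the field names, the weights, and the thresholds are all mine, and CPAN does not actually publish most of these numbers - you would have to collect them yourself from bug trackers and release histories.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical "health score" combining the metrics above.
# Field names and weights are invented for illustration only.
sub module_health_score {
    my ($m) = @_;

    # Bug-fix ratio: many reported bugs are fine if most get closed.
    my $fix_ratio = $m->{bugs_reported}
        ? $m->{bugs_fixed} / $m->{bugs_reported}
        : 0.5;    # no bug data: assume neutral

    # Release cadence: releases per year since the first release,
    # capped so a flurry of releases doesn't dominate the score.
    my $years   = ($m->{months_since_first} || 1) / 12;
    my $cadence = $m->{releases} / $years;
    $cadence = 2 if $cadence > 2;

    # Staleness: over two years without a release is a bad sign.
    my $fresh = $m->{months_since_last} <= 24 ? 1 : 0;

    # Arbitrary weighting; maximum possible score is 6.
    return sprintf "%.2f", 2 * $fix_ratio + $cadence + 2 * $fresh;
}

my $score = module_health_score({
    bugs_reported      => 40,
    bugs_fixed         => 35,
    releases           => 12,
    months_since_first => 36,
    months_since_last  => 3,
});
print "health score: $score (out of 6)\n";
```

An actively maintained module with a high fix ratio scores near 6; an abandoned one with unfixed bugs scores near 0. The point is not the particular formula but that none of the inputs require a subjective rating from anybody.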

SourceForge has some fairly useful metrics for hosted projects that 
report similar things. I have found them very useful.

Of course, making it easier to actually rate modules would be useful, 
though ratings are more subjective and personal than, say, "number of 
downloads". I wouldn't recommend treating either the metrics I suggest 
or the reviews as a measure of quality. They are merely numbers and 
have to be interpreted by the consumer.





nntp.perl.org: Perl Programming lists via nntp and http.
Comments to Ask Bjørn Hansen at ask@perl.org | Group listing | About