I continually think about ways that we can improve the Eclipse Development Process and the various bits of infrastructure and process that exist to support it.
The whole review process has always troubled me because it has really turned into a bit of a rubber-stamp exercise.
Early on, reviews were events that people attended. A project undergoing a review was required to assemble presentation materials that would be delivered on a conference call with interested parties, including project members, relevant PMCs, adopters, and other members of the community. It didn't take long for us to determine that these calls were pretty pointless: attendance was low, and, while there was sometimes some interesting related discussion, direct discussion of the review itself tended to be one-sided (i.e. the project representative did all of the talking). So the call became optional in favour of mailing-list and forum-based discussion. Eventually, we dropped the idea of the call altogether.
Today, we've changed the notion of a review event into a review period. Previously, review material had to be posted a full week in advance of the (potential) call so that participants could be prepared. We turned this week of materials-review time into the review itself. During this review period, the community is encouraged to comment on and question the review materials. At the end of the review period, the EMO decides whether or not the review is successful.
This is where the rubber stamp part comes in.
By this point, the IP team has, if necessary, reviewed the IP Log for the project and determined that everything is okay; the project's PMC has approved the event; and the EMO has confirmed that the review materials contain the required information. From a process point of view, everything is pretty much in order before the start of the review period (there are exceptions, but this is the general rule).
At the end of the review period, the review is deemed successful. In my tenure, I have never failed a review (with the possible exception of the TPTP Termination review, which I don't believe was ever actually scheduled). So I find myself asking: what's the point of the review period?
Reviews are undertaken primarily to inform the community of key events in the lifecycle of a project and to give them an opportunity to react. Adopters, for example, need to know about releases so that they can take advantage of new features or react to API changes. But the review itself comes very late in the development cycle; an adopter who is not involved with a project far in advance of the release review isn't going to get much advantage from a week-long review period that precedes the actual release by a couple of days. No, an adopter who needs to keep abreast of developments in a project, much like the project committers themselves, is going to need to be involved very early in the development cycle. This is why project plans and ongoing communication are important (probably in the opposite order).
Frankly, I’m not sure if reviews are useful or interesting for the user community; users probably get more value from a “new and noteworthy” document than they would from a release review document.
So what is the next step in the evolution of our review process? I have some thoughts about the various types of review that I intend to discuss in the coming days and weeks. In the meantime, if there’s something that you feel you can contribute to the discussion, I’d love to hear from you.