One of the bigger things that I’ve been working on this year is the introduction of a formal Security Policy for Eclipse projects. Due to the nature of how work actually gets done around here, this has been a particularly interesting challenge. The security policy cannot, for example, make any guarantees with regard to who, what, when, or how vulnerabilities are addressed: these are development issues, and development issues are the domain of the individual projects. I do know from experience that Eclipse projects take the disclosure of a security issue very seriously and that the project committers tend to rally to the cause of fixing these issues. My intuition is that it is a matter of pride, and pride can be a serious motivator. In that light, the policy is intended primarily as a set of guidelines for committers and the community for dealing with vulnerabilities, along with a set of tools and services to make it work.
The policy itself is in draft form and is evolving (most of the discussion has been happening on Bug 337004 and a handful of blocker bugs). My intent is to make it, along with the necessary services around security and vulnerability handling, available at the end of June (to coincide with the Indigo release). Note that this policy and the practices around it will continue to evolve.
The heart of the Eclipse Security Policy is the Eclipse Security Team. The Eclipse Security Team is the main point of contact for security issues, facilitates triage, and provides security-related expertise in remediation activities. The security team does not, in general, do the actual remediation work. The policy limits the size of this team to a maximum of seven and requires participation from the Eclipse and RT top-level projects (based on the assumption that these projects are most likely the biggest targets for security issues). The Security Team initially also includes a representative from the Eclipse Foundation staff (i.e. me). While this inclusion is not specifically baked into the policy, I foresee it being a permanent feature. The primary reason for keeping the security team small is to reduce the risk that information is disclosed too early. It's not that we don't trust a larger body like the Architecture Council; it's really just a large numbers thing: the more people you add into the equation, the greater the risk of accidental disclosure.
The main point of contact for the Eclipse Security Team will be an email alias, email@example.com (not established yet); this alias is not itself secure. For matters that require secure communication, an Eclipse Foundation staff member (i.e. yours truly) will make their public key available (this is not baked into the policy). That staff member is also the main point of contact for external organizations like CERT.
The policy provides two means of reporting vulnerabilities: the Eclipse Security Team via email, or Bugzilla. The email option is pretty obvious. For vulnerabilities reported via Bugzilla, there is an option to limit visibility of the record to "committers-only". With this flag enabled, only committers, the reporter, and anybody copied on the bug are able to view the bug. The reporter can set this flag, or it can be set after-the-fact during the triage process. You can query for bugs with this flag by searching for bugs with the "group" field set to "Security_Advisories".
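If you prefer to run that query programmatically, here is a minimal sketch using Bugzilla's REST interface. Note the assumptions: the endpoint path under bugs.eclipse.org and the `group` query parameter name are guesses from generic Bugzilla conventions, not something the policy specifies, and actually fetching restricted bugs would of course require committer credentials.

```python
from urllib.parse import urlencode

# Hypothetical base URL for the Eclipse Bugzilla REST endpoint;
# the exact path is an assumption, not part of the policy.
BASE = "https://bugs.eclipse.org/bugs/rest/bug"

def security_advisories_query(extra_params=None):
    """Build a query URL for bugs restricted to the
    Security_Advisories group (parameter name assumed)."""
    params = {"group": "Security_Advisories"}
    if extra_params:
        params.update(extra_params)
    return BASE + "?" + urlencode(sorted(params.items()))

url = security_advisories_query()
# To actually run the query (network access and committer
# credentials required, since these bugs are visibility-restricted):
#   import urllib.request, json
#   with urllib.request.urlopen(url) as resp:
#       bugs = json.load(resp)["bugs"]
```

The same filter is available interactively through Bugzilla's advanced search, which is likely the more common route for committers doing triage.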
As long as this flag is enabled, the bug is only visible to committers and those individuals explicitly listed on the bug. This makes progressive disclosure possible (known adopters can be copied on the bug to give them a ‘heads up’, for example). The bug can’t stay in this state indefinitely: the community must be informed. The timing of that disclosure is probably one of the more controversial aspects of the policy. I’ll address that in my next post.