Though many of the details have yet to be hammered out, the move marks the beginning of what could be the widespread emergence of ethical rules for security research.
"There has been a need for industry convergence around a code of conduct for releasing exploits," said Eddie Schwartz, chief operating officer for security services firm Guardent, a founding member of the group. "We are going to form an organization to help us deal with the vulnerabilities. Ultimately, we want to develop some standards for releasing these things."
The move, announced at Microsoft's Trusted Computing conference, had been widely expected.
In early October, Scott Culp, the manager of Microsoft's security response center, published an essay on the harm done by the unethical release of vulnerability information. In addition, the company had been using the Trusted Computing conference as a sounding board for several proposals, said several attendees.
Along with Microsoft and Guardent, security companies @Stake, Bindview, Foundstone and Internet Security Systems also supported the announcement, Schwartz said. The formal announcement of the group is expected within a month, and more partners will be added, he said.
"This is not just Microsoft initiative," Schwartz stressed. "We want the Ciscos and Suns and security vendors to join as well."
Vulnerability disclosure has been an emotional topic in the software security industry for some time. The latest announcement has already sparked controversy: Russ Cooper, a software security expert and editor of security mailing list "NTBugTraq," published his own guidelines for an independent security group, called the Responsible Disclosure Forum. Cooper boycotted Microsoft's conference largely because he distrusts the software giant's motives.
For the most part, however, Cooper and Microsoft agree on the problems that fully disclosing software flaws can create.
"You either participate in the Responsible Disclosure Forum, or you're a black hat bent on being malicious. End of story," he wrote in the introduction to the guidelines. "Too much money, too many individuals and too much of the world's communication rely on responsible disclosure for it to be continued to be seen as a discussion worth debating."
The Microsoft-supported guidelines tentatively give software makers 30 days to patch their products after being informed of a flaw. They also require members to respond promptly to a report of a security hole and keep the original author advised of their progress.
"This is something we talked about 11 months ago (at a previous security conference) and we have some real traction now," Microsoft's Culp said.
Not everyone agrees, however, that full and open discussion of security issues is harmful.
While he didn't oppose Microsoft's push to limit the release of programs that indiscriminately exploit software flaws, Matt Blaze, a security researcher at AT&T Labs, worried that the industry might overcompensate out of fear of being seen as supporting malicious hackers and online vandals.
"Since I do that (discover security vulnerabilities) for a living, I'm a bit concerned," he said. "As a researcher in this area, I depend on the open exchange of information."