Codenomicon CTO discusses tackling vulnerabilities
This week, Robert Vamosi speaks with Ari Takanen, co-founder and CTO of Codenomicon, about vulnerabilities and independent security researchers.
This week, I had a chance to talk by phone with Ari Takanen, co-founder and CTO of Codenomicon. Takanen's company doesn't engage in vulnerability research but instead creates the tools by which enterprises can check their own software for vulnerabilities.
Which raises a question: on previous shows I've interviewed independent researchers who, working outside a given company, have identified and publicly disclosed serious vulnerabilities. One would think an independent voice might be better than one located inside a company.
Below is a transcript of part of my interview. The entire podcast can be heard here.
Q: What do you think of the independent security researchers--I know you designed systems for enterprises to look at their own software--but what about the independents who are out there? What do you think of the disclosure process as it stands--the 90-day window that they often give or sometimes don't give?
Ari Takanen: I think the worst thing in this market is that many of the enterprises, many of the manufacturers actually, don't yet understand the value of security problems. So, for example, we've been looking at this topic for ages, ever since we started in 1996. Back then, vulnerability disclosure was one of our favorite topics. I've been writing academic articles on the topic ever since. So, one of the problems is that if people don't understand the value of a security issue, they don't really want to fix it either. If someone, even one of their customers, finds a security problem and tries to report it to the developers, those developers are so consumed by the hundreds and hundreds of other issues that they need to fix. If they don't know how to prioritize those security problems, it's just mission impossible for anyone to actually get anything fixed.
So, what happened as a result of that was this public disclosure movement, which I'm not a big fan of. It meant that because no one was motivated to do anything unless there was public pressure, people just publicly reported the problems and created the pressure for them to actually create a fix. What is happening, at least the way I see it, is that many people already understand what a security issue is. So if you report an issue, they will fix it. Whether it needs to be public is something that people are really arguing about.
It is good for independent researchers, because the only thing they get from it is publicity. So, why not give them publicity? On the other hand, when people start to understand the value of problems as well, you get these bug hunters. It's a completely new trade, basically, where people find problems and get paid for their service per discovery, and those problems are fixed and not necessarily ever disclosed to the public. So, it's kind of like a new area of security research: you get paid by the findings you make instead of being paid by the time you spent on security analysis. I'm not sure whether actual public disclosure is such a good idea anymore, because I don't see any benefit from it. Unless you can't get someone to fix a product.