
Who says biology need be destiny?

Sana Security founder Steven Hofmeyr says parallels between computer security and human immunology threaten to take the industry down paths that may prove to be dead ends.

As humans, we like turning to biology for inspiration when we are faced with hard technological problems.

For example, the Wright brothers studied the flight of birds in designing planes with flexible, twisting wings and an aerofoil shape to provide lift.

Today, researchers seeking answers to the technological issue of securing computer networks are emulating the Wrights in turning to nature for solutions.

Current security is inadequate, as evinced by the rash of worms, viruses and other forms of attack that have plagued computer networks over the last few years. But the human immune system faces an even tougher problem in protecting the body against a bewildering array of diseases and micro-organisms. And our immune system has to secure an environment vastly more complex than all computer networks put together.

So the temptation to draw parallels between computer security and human immunology is understandable and, at times, worthwhile and inspiring. But taken too far, it can also lead us down paths that may prove to be dead ends.

First, we need to look at the three levels at which we can draw parallels between biology and technology: buzzwords, principles and mechanisms.

At the shallowest level, drawing biological parallels can result in buzzwords or marketing hype with little relation to reality. For instance, Cisco Systems is developing a "Self-Defending Network" that it touts as operating "much in the same way the human body's immune system keeps us healthy, while preventing and fighting infections on a daily basis," CEO John Chambers said recently. There are no actual similarities between the immune system and the "Self-Defending Network" other than the fact that they both fight "viruses."

At a deeper, more meaningful level, we can isolate principles behind the functioning of the biological system and use those as guidelines for designing security systems. For example, the immune system is autonomous, requiring no centralized control. It's also robust, rebounding from local failures, and adaptable, able to function effectively in dynamic, noisy environments. In addition, the immune system relies on diversity to improve resistance within the body and across populations.

The security industry has grappled with the diversity principle. Many have commented that Microsoft's dominance of the software industry has led to a "software monoculture" that exacerbates the security problem, because one vulnerability can affect the vast majority of systems in use.

This idea of diversity was first investigated in 1997 by Stephanie Forrest and Anil Somayaji at the University of New Mexico, who studied methods for randomizing compiled code so that each version of a program would require a different method of attack.

Similar concepts are being used commercially by Cloakware, which has developed a software transformation method that allows a user to generate multiple versions of an application that are functionally identical but structurally different.
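The diversity idea can be sketched in a few lines. The toy below is an illustration of the principle only, not Cloakware's or the UNM group's actual technique: each "build" of a program reorders independent steps and pads its layout differently, so every copy is structurally unique while remaining functionally identical.

```python
import random

def make_variant(seed):
    """Generate one 'diversified' build of the same toy computation.

    Each variant differs structurally (operation order, random padding)
    but is functionally identical: every build computes 0 + 1 + 2 + 3.
    """
    rng = random.Random(seed)
    # Independent, commutative steps can be emitted in any order.
    steps = [lambda acc: acc + 1, lambda acc: acc + 2, lambda acc: acc + 3]
    rng.shuffle(steps)
    # Pad with a random amount of dead data, so the 'layout' differs too.
    padding = rng.randint(1, 8)

    def program():
        acc = 0
        for step in steps:
            acc = step(acc)
        return acc

    return program, padding

# Two builds with different seeds: different internal structure,
# identical behavior -- both return 6.
p1, pad1 = make_variant(1)
p2, pad2 = make_variant(2)
print(p1(), p2())
```

An exploit tailored to one build's internal layout would not transfer cleanly to another, which is exactly the resistance-through-diversity effect the immune system relies on.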

When we draw parallels at the deepest level, we try to mimic mechanisms present in the biological system to solve problems security systems face. For example, security systems are plagued by excessive false positives, which occur when acceptable behavior is incorrectly identified as malicious, resulting in false alarms, or worse, blocking legitimate behavior. E-mails that don't get through because of spam filters are one instance of this.

Interestingly, the human immune system suffers from a similar problem of false positives. Such false positives, in which the immune system reacts against the body's own tissue, are the basis of autoimmune disorders such as multiple sclerosis and lupus, but these disorders are relatively rare.

Generally, the immune system overcomes the false-positive problem through "costimulation," in which an immune system cell requires two signals to be activated.

The first signal is generated when the cell detects something anomalous (and binds to it), and the second signal is generated when there is damage to the body. Consequently, the immune system only mounts a reaction when it detects something unusual that has damaged the body, and the strength of the reaction is proportional to the damage.

We can apply this idea to computer security by looking for damage indicators in technology systems. If we are concerned about stopping a worm that causes computers to crash, then we could have a system that looks for abnormal network behavior and reacts only when that behavior is correlated with computers crashing. This would not protect all computers from crashing, but it could protect enough of them to be a viable solution on an enterprise level.
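The two-signal rule might be sketched as follows. This is a minimal illustration of the costimulation idea, not any product's actual logic; the function name and thresholds are invented for the example.

```python
def costimulated_response(anomaly_score, damage_level,
                          anomaly_threshold=0.8, damage_threshold=0.1):
    """Two-signal gate inspired by immune costimulation.

    Signal 1: how anomalous the observed behavior is (0..1).
    Signal 2: how much damage is observed, e.g. the fraction of
              machines crashing (0..1).
    React only when both signals are present; the strength of the
    response is proportional to the damage, never to anomaly alone.
    """
    if anomaly_score < anomaly_threshold or damage_level < damage_threshold:
        return 0.0  # a single signal never triggers a reaction
    return damage_level

# An anomaly with no observed damage is ignored (no false alarm)...
print(costimulated_response(0.95, 0.0))   # 0.0
# ...but the same anomaly correlated with crashes draws a response.
print(costimulated_response(0.95, 0.4))   # 0.4
```

The gate suppresses false positives by construction: unusual-but-harmless behavior never clears the damage threshold, so it never triggers a reaction.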

This idea is similar to IBM's Digital Immune System for Cyberspace, which was also inspired by a study of immunology. In the IBM system, each machine looks for signs of infections and reports them to a central virus analysis engine, which develops a fix and distributes it to other machines. Effectively, the system is using damage signaling to validate detected abnormalities.
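The report-and-distribute loop described above can be sketched roughly as follows. This is a hypothetical illustration of the pattern, not IBM's implementation; the class and method names are invented, and a hash stands in for real virus analysis.

```python
import hashlib

class AnalysisCenter:
    """Central engine: turns validated samples into 'fixes' (here,
    just signatures) and makes them available to every machine."""
    def __init__(self):
        self.signatures = set()

    def analyze(self, sample: bytes) -> str:
        sig = hashlib.sha256(sample).hexdigest()  # stand-in for real analysis
        self.signatures.add(sig)
        return sig

class Machine:
    def __init__(self, center: AnalysisCenter):
        self.center = center
        self.known = set()

    def observe(self, sample: bytes, damaged: bool):
        # The damage signal validates the anomaly before it is
        # reported centrally, the costimulation idea again.
        if damaged:
            self.center.analyze(sample)

    def sync(self):
        # Pull the latest fixes distributed by the center.
        self.known |= self.center.signatures

center = AnalysisCenter()
a, b = Machine(center), Machine(center)
a.observe(b"worm payload", damaged=True)  # one machine reports an infection
b.sync()                                  # every other machine gets the fix
print(len(b.known))                       # 1
```

One infected machine's validated report thus protects the whole population, much as an immune response in one part of the body confers system-wide protection.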

Clearly, the deeper the parallels between biology and technology, the more important it is to map the corresponding elements correctly. The issue of confidentiality illustrates how such parallels can lead us astray. The immune system has not evolved to ensure confidentiality, and yet that is a key aspect of computer security. In this case, a study of the immune system does not give us any ideas on how to detect damage caused by the exposure of confidential data.

Furthermore, if we do not understand the biological system properly, what hope do we have of using it as a model for technological systems? To return to the Wright brothers, some of what they learned about birds could be applied to airplane wing design, but the Wrights also recognized what had to be discarded. An airplane needed fixed wings, not flapping wings that mimic the way birds fly. Historical attempts to build "ornithopters" were dismal failures. Even today, the aerodynamics of flapping wings is not fully understood.

The same cautionary tale applies to all uses of biological parallels. We should not slavishly adhere to the parallel but instead use it when and where it has power. The closer the correspondence between the biological and artificial systems, the more powerful the analogy becomes.

We need to understand this correspondence more fully and, where it falls short, consider redesigning the systems we are trying to protect to be more biological in nature. By following this principle, we can move toward the goal of having autonomous, adaptive, self-healing information systems that protect themselves.