Beating the software blues

TeaLeaf Technology founder Robert Wenig says application quality continues to plague the computer industry--this despite all the money invested in monitoring systems and enhancing their performance.

Hollywood's "The Matrix" has achieved systems monitoring nirvana. The intelligent machine architects of this most intricate of networks have developed programs--uber-sophisticated scripts--that constantly police and test the system's performance. This matrix never goes down, and the diagnostic capabilities of the monitoring agents render the network self-aware and infallible.

We are all still awaiting the trilogy's conclusion, but it is already obvious that the matrix's supposedly infallible monitoring system has a fatal flaw, one that undermines its entire purpose. Is there a real-world IT lesson to be learned here?

In creating the technology world's foremost romantic screen epic, the Wachowski brothers have hit on an important point that the IT community has failed to grasp: No matter how sophisticated a system is, no technology can accurately predict, emulate or control human behavior.

Ultimately, the performance of an application is not judged by the elegance of its supporting infrastructure, but by the functionality it provides to its users. Any application that neglects to hold this measure above all others is doomed to fail.

Today, we have created highly sophisticated performance-monitoring systems that tell us our systems are up 99.999 percent of the time. Applications are running, pages are loading and the lights are on. So why is it that businesses are spending more money than ever on help desk resources and support costs? With all the money we've invested in monitoring our systems and enhancing their performance, why does application quality still plague us?
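
It is worth pausing on what "five nines" actually buys. As a simple back-of-the-envelope illustration (not any vendor's figures), a system that is up 99.999 percent of the time concedes only about five minutes of downtime a year--while saying nothing about whether the other 525,595 minutes were usable:

```python
# Back-of-the-envelope arithmetic: downtime allowed at each "nines" level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for nines in (2, 3, 4, 5):
    availability = 1 - 10 ** -nines             # e.g. 0.99999 for five nines
    downtime = MINUTES_PER_YEAR * 10 ** -nines  # minutes of downtime per year
    print(f"{availability:.3%} uptime -> {downtime:,.1f} minutes down per year")
```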

Because, like the architects of the matrix, we are not omniscient--and neither are our monitoring solutions. In our quest to innovate, to architect, we've forgotten that the heart and purpose of the complex networks we've built is not to hum along in perpetuity, but to serve as tools, tools that help human users accomplish certain tasks.

Like the matrix's agents, our monitoring tools pursue form and ignore function. The reality is that while system uptime metrics are necessary to alert IT immediately to a system interruption, they fall substantially short of measuring whether an application is performing as designed.

If the monitoring systems report that things are running well--servers are up, Web pages are loading quickly--it's a good day! The limitations of this solipsistic approach to system performance are revealed when we account for the user left glaring at a blank page--a page that today's performance-monitoring solutions report is loading jiffy quick.
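
To make the gap concrete, here is a minimal sketch of the kind of availability probe such systems run. The URL and thresholds are hypothetical placeholders; what matters is the question the check never asks.

```python
# A minimal sketch of a conventional availability probe (hypothetical URL).
# It reports success even when the page a user actually sees is blank.
import requests

CHECK_URL = "https://example.com/checkout"  # placeholder endpoint

def site_is_up() -> bool:
    response = requests.get(CHECK_URL, timeout=5)
    # The classic test: did the server answer quickly with HTTP 200?
    return response.status_code == 200 and response.elapsed.total_seconds() < 2.0

# What the probe never asks: did the page render anything a user could
# act on? A fast 200 response wrapping an empty template still passes.
```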

For IT to deliver on its promises of leaps in productivity, cost savings and competitive advantage, we must begin measuring a system's performance from the perspective of the single constant and final arbiter of success: the ability of a real user to conduct business.

Software testing solutions, for example, are designed to remove the potential for failure before an application is implemented. Extensive testing is done before production: synthetic scripts are run, monitored and updated in an effort to accurately emulate the dynamic nature of distributed, interconnected applications.

In essence, synthetic scripts are designed to predict and anticipate end-user behavior--to script cause and effect in advance. Eventually, however, every application leaves the sanctity of QA and enters the messy world of implementation.
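
A synthetic script is, at bottom, a wager on what users will do. The sketch below shows the shape of one such scripted "happy path"; the endpoints, credentials and assertions are invented for illustration.

```python
# A sketch of a synthetic test script: one predicted path through an app.
# Endpoints, credentials and assertions are hypothetical placeholders.
import requests

session = requests.Session()

# Step 1: the scripted user logs in exactly as the tester predicted.
resp = session.post("https://example.com/login",
                    data={"user": "test", "password": "secret"})
assert resp.status_code == 200

# Step 2: the scripted user searches for the one item the script knows about.
resp = session.get("https://example.com/search", params={"q": "widget"})
assert "widget" in resp.text

# Real users paste odd characters, hit Back mid-transaction, open two tabs,
# let sessions expire and retry. None of that is in the script, so none of
# it is ever tested.
```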

Once an application is implemented, the test scripts we relied upon to police the integrity of the program fail us. They are incapable of showing us what our users are actually doing, or how our applications are functioning in response. "The Matrix" paints an exaggerated picture of the mayhem that can happen when the controlled laboratory is left behind and actual users have at the product.

Cheap, plentiful processing power and more robust storage have made it possible to correlate an entire user session into a single, reviewable file. This means that it is entirely possible--and simple--to understand the user-application interaction.
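
In outline, the approach is simple: tag each request with a session identifier, record every request/response pair as it occurs, and write the correlated whole to one reviewable file. The sketch below is a minimal illustration of that idea; the record format and field names are mine, not any particular product's.

```python
# A minimal sketch of session capture: correlate every step a user takes
# into a single reviewable file. Field names are illustrative only.
import json
import time
from collections import defaultdict

# One event list per session ID, accumulated as traffic flows through.
sessions: dict[str, list[dict]] = defaultdict(list)

def record(session_id: str, url: str, status: int, body_size: int) -> None:
    """Append one request/response pair to the session's timeline."""
    sessions[session_id].append({
        "timestamp": time.time(),
        "url": url,
        "status": status,
        "body_size": body_size,
    })

def close_session(session_id: str) -> None:
    """Write the whole session out as one file a human can review and replay."""
    with open(f"session-{session_id}.json", "w") as f:
        json.dump(sessions.pop(session_id), f, indent=2)

# The file answers the question uptime metrics cannot: what did this user
# actually do, and what did the application do back?
record("abc123", "/checkout", 200, 0)  # HTTP 200 -- but a zero-byte page
close_session("abc123")
```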

Before this new monitoring paradigm sees mass adoption, however, we need a fundamental shift in perspective. IT, as an industry, must stop focusing on the idea of a matrix--the bits, bytes and components--and begin taking a closer look at where the application faces its most important performance test: at the browser, where a real person sits trying to conduct business, but all too often failing to do so.