But now a rapidly emerging technology called the multicore processor is fundamentally changing the way computers interact with software. This change is adding multiple layers of complexity to the once-simple per-CPU model and forcing software companies to re-evaluate the way they are licensing their products.
Without getting into technical details, a multicore processor essentially makes a single processor computer behave like a multiprocessor computer without taking up an additional socket (what was once called a CPU). The result is essentially more processing power.
From a licensing standpoint, the question then becomes: "If a multicore processor provides better performance, shouldn't the vendor then charge for more licenses? And if so, how much more?"
The disruption of multicore processing has created two camps of software vendors that are arguing about which approach is better: counting by core or counting by socket.
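The gap between the two camps can be sketched with a toy calculation. The function names, server configuration and the idea of a flat one-license-per-unit rule below are illustrative assumptions, not any particular vendor's policy:

```python
# Illustrative sketch (hypothetical hardware and counting rules): how the
# two camps arrive at different license counts for the same server.

def per_socket_licenses(sockets: int, cores_per_socket: int) -> int:
    """Socket-based counting: one license per occupied socket,
    no matter how many cores each processor contains."""
    return sockets

def per_core_licenses(sockets: int, cores_per_socket: int) -> int:
    """Core-based counting: one license per core, across all sockets."""
    return sockets * cores_per_socket

# A two-socket server with quad-core processors:
sockets, cores = 2, 4
print(per_socket_licenses(sockets, cores))  # 2 licenses
print(per_core_licenses(sockets, cores))    # 8 licenses
```

The same physical box is billed for two licenses under one model and eight under the other, which is exactly why the debate matters to customers.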
Despite the debate around these technical approaches to the problem, both are missing a far more important piece of the equation. It's not that either argument is right or wrong; it's that neither takes into account the actual value that customers are deriving out of the software. In other words, both approaches are focused on licensing the environment in which the software is being run, as opposed to the value that the customer is deriving from that software.
Another argument that adds complexity to the discussion is: "What if the application doesn't need the full power of the machine?"
If you have two applications running on one machine with eight CPUs, each application will typically be licensed as though it is running on an eight-CPU machine, as if it were the only thing running on those CPUs. So not only do these models assume the application is taking advantage of all the CPUs on the machine, but it is also not clear (and highly unlikely) that you are getting twice the value from a machine with twice the CPUs. Even worse, when a vendor charges only for the number of CPUs an application actually uses, that count is frequently administered through a painful, manual auditing process.
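The double-billing described above can be put in numbers. The per-CPU price and the CPU counts below are hypothetical, chosen only to show the shape of the problem:

```python
# Illustrative sketch (hypothetical price): two applications sharing one
# eight-CPU machine, each licensed as though it owns all eight CPUs.

PRICE_PER_CPU = 1000  # hypothetical per-CPU license fee

def machine_based_fee(total_cpus: int, num_apps: int) -> int:
    # Each application pays for every CPU in the machine,
    # regardless of how many CPUs it actually uses.
    return total_cpus * PRICE_PER_CPU * num_apps

def usage_based_fee(cpus_used_per_app: list[int]) -> int:
    # Charging only for the CPUs each application actually uses is
    # fairer, but in practice requires the manual auditing noted above.
    return sum(cpus * PRICE_PER_CPU for cpus in cpus_used_per_app)

print(machine_based_fee(8, 2))   # 16000: both apps billed for all 8 CPUs
print(usage_based_fee([3, 5]))   # 8000: billed only for CPUs in use
```

Two applications that together never exceed the machine's eight CPUs are billed as if sixteen CPUs were in play.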
Now that the definition of a CPU has changed, the relationship between value and the number of cores, CPUs and threads is even less predictable, and quantifying that value is harder than ever, if not impossible.
Value is best measured by very application-specific benchmarks, such as what features or capabilities get accessed or the number of users connected. The lure and strength of CPU-based licensing has been its simplicity. You don't have to count transactions, for example; it's just a single static number.
But now there are several major shifts that are shaking up the simplicity--and very definition--of CPU-based thinking, including single core versus multicore, hyper-threaded CPUs and simultaneous multithreading, among others. These technologies essentially make a single CPU act as if it were multiple virtual CPUs.
The evolution of processing has placed a major strain on existing licensing models relying on CPUs and calls for other models to be available. As the landscape has evolved, there have been other alternatives--including floating, utility, node lock, subscription and pay-per-use pricing--so that vendors and enterprises can charge and pay for the value of the software regardless of the processing environment. As CPU-based licensing becomes more complicated, many software vendors are reevaluating their need for these alternative models.
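As a rough comparison of a static CPU-based fee against one of these usage-metered alternatives, consider the sketch below. The annual rates, transaction counts and per-transaction price are all hypothetical, meant only to show why the two models can diverge sharply for the same customer:

```python
# Illustrative sketch (hypothetical rates): a static CPU-based fee versus
# a pay-per-use alternative for the same software on the same machine.

def cpu_based_annual_fee(cpu_count: int, fee_per_cpu: int = 2000) -> int:
    # Static: a single number fixed by the hardware environment,
    # independent of how much the software is actually used.
    return cpu_count * fee_per_cpu

def pay_per_use_annual_fee(transactions: int,
                           rate_per_transaction: float = 0.01) -> float:
    # Usage-metered: the bill tracks the value actually derived.
    return transactions * rate_per_transaction

print(cpu_based_annual_fee(8))          # 16000, regardless of usage
print(pay_per_use_annual_fee(500_000))  # 5000.0 for a lightly used year
```

Under the static model a lightly used deployment pays the same as a heavily used one; the metered model lets the bill follow the value, at the cost of having to count something other than CPUs.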
If vendors do not look to these value-based models, customers will be confused about what they are paying for. They will get frustrated that their license agreements do not align with the value they are deriving. This ultimately gives competitive advantage to those companies that better understand the relationship between value and licensing.
Those competitors that are better meeting customer needs by offering more flexible and sensible licensing models will have a distinct advantage in the marketplace. I don't think software vendors want to be caught playing catch-up, do you?