
Intel vs. Nvidia: The tech behind the legal case

The FTC has been looking into whether Intel engaged in anticompetitive behavior against Nvidia. An expert weighs in on the tech behind the case.

Brooke Crothers, former CNET contributor

The graphics chip has become one of the big legal battlegrounds for Intel.

To get a better understanding of what all of the legal wrangling is about, I asked an expert to describe the technology underlying the court battle between Intel and the world's largest purveyor of standalone graphics chips, Nvidia.

To date, the antitrust actions against Intel have focused on the sales practices for central processing units, or CPUs, an area where Intel and Advanced Micro Devices have been skirmishing for decades. In December, however, the Federal Trade Commission, in effect, inserted itself into the legal wrangling between Intel and Nvidia when it alleged in a complaint that Intel was engaged in anticompetitive practices in the graphics chip market.

Intel and the FTC are currently trying to negotiate a settlement, with a deadline of July 22. If they don't reach an agreement, the FTC case against Intel will go to trial, slated to begin Sept. 15. Intel's suit and Nvidia's countersuit are expected to be addressed in some form in any settlement, along with the longstanding AMD issues.

Nvidia, the Santa Clara, Calif.-based neighbor of Intel, is the world's leading supplier of "discrete," or standalone, graphics chips but takes a distant second place in overall market share to Intel, which supplies "integrated" graphics built into the chipsets that accompany all of its processors.

One of the central issues now in the legal squabbling is Intel's Nehalem design. (Nehalem is Intel's latest chip architecture and includes processors such as the Core i3, i5, and i7.) With the introduction of the Nehalem architecture, Intel has asserted, via court filings, that Nvidia, in effect, no longer has the right to attach chipsets to Intel CPUs--locking Nvidia out of a potentially large market. (Intel claims it has the legal right to do so because the technology has changed.) Before Nehalem-based designs emerged, Nvidia supplied chipsets for Apple's MacBook, MacBook Air, and MacBook Pro, for example. Now it is prevented from doing so.

And the dynamics of the market are changing quickly as Intel yanks the graphics function out of the chipset (which is a separate piece--or pieces--of silicon) and moves it onto the CPU itself. In other words, what used to be a CPU is now, for Intel, the functional equivalent of both a CPU and GPU, or graphics processing unit.

Via an e-mail exchange, I asked David Kanter about the technology behind the case. Kanter is an editor and analyst at Real World Technologies, which covers chip technology in depth.

The key technological issues in the case are the connection technologies. Can you describe them?
The front-side bus (FSB) is an Intel bus technology that connects one or more CPUs to the memory controller. For almost all Intel CPUs up to the Penryn (Core 2) generation, the FSB connects the CPU(s) to the chipset (which contains the memory controller). The FSB has been licensed out to many companies, including chipset partners like Nvidia and ATI, and accelerator vendors like Nallatech (which puts a field-programmable gate array, or FPGA, into the CPU socket for scientific computing). The FSB architecture dates back to the Pentium Pro; it is a shared bus optimized for low cost, and it generally restricts the system to a single memory controller. The FSB is also used by the older "Diamondville" and "Silverthorne" Atom processors.

And then there's the Quick Path Interconnect (previously known as Common System Interface or CSI), an Intel interconnect technology that connects one or more CPUs, memory controllers and other chips in the system. The other chips in the system could include a third party memory controller, a chipset for I/O, etc. One of the main points of QPI is that it is designed for the system to have multiple memory controllers. This is one of the major selling points of the Nehalem generation for servers (and one of AMD's big advantages over Intel in that space previously). QPI is licensed out to select server partners for very specific applications (e.g., Dell, IBM, HP), such as building larger 16 or 32 socket (processor) servers. QPI is similar to (AMD's) Hypertransport and is optimized for higher performance (but higher cost) than the FSB. It is a point-to-point interconnect (as opposed to a shared bus like the FSB). QPI is used in Nehalem and newer generations of x86 CPUs. It is also used for Intel's Tukwila and newer Itanium processors.

Finally, there is the Direct Media Interface (DMI), an Intel interconnect technology that connects the northbridge and southbridge (the two parts) of a chipset. It cannot be used to connect a memory controller to a CPU (or to link two or more CPUs) because it is not "coherent." DMI is very similar to PCI-Express but has some subtle differences. DMI is used in both older (pre-Nehalem) and newer (post-Nehalem) chipsets. DMI has not (to my knowledge) been used outside of Intel. In some cases the northbridge is integrated into the same chip or package as the CPU (e.g., newer Core i3 and i5 products).
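To make the difference between these interconnects concrete, here is a minimal, purely illustrative Python sketch. The topology model and component names are my own invention, not Intel's; it simply shows why a shared bus like the FSB funnels every CPU through one memory controller, while point-to-point QPI links give each socket its own.

```python
# Purely illustrative model of the interconnect topologies described
# above. All names and structures are a sketch, not real firmware.

# Pre-Nehalem FSB system: every CPU shares one bus that ends at the
# single memory controller inside the chipset's northbridge.
fsb_system = {
    "shared_bus": "FSB",
    "cpus": ["cpu0", "cpu1"],          # all CPUs contend for the same bus
    "chipset": {
        "northbridge": "memory controller + integrated graphics",
        "southbridge": "I/O",          # linked to the northbridge via DMI
    },
}

# Nehalem-style QPI system: coherent point-to-point links, with a
# memory controller integrated into each CPU socket.
qpi_system = {
    "links": [("cpu0", "cpu1"), ("cpu0", "io_hub")],
    "memory_controllers": ["cpu0", "cpu1"],   # one per socket
}

def count_memory_controllers(system):
    """One controller in the chipset for FSB; one per socket for QPI."""
    if "chipset" in system:
        return 1
    return len(system["memory_controllers"])

print("FSB sockets:", len(fsb_system["cpus"]),
      "- memory controllers:", count_memory_controllers(fsb_system))
print("QPI sockets:", len(qpi_system["memory_controllers"]),
      "- memory controllers:", count_memory_controllers(qpi_system))
```

The point of the sketch is the scaling behavior Kanter describes: in the QPI model, memory bandwidth grows with socket count, which is why multiple memory controllers were a major selling point of Nehalem servers.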

Isn't Intel phasing out the front-side bus?
Yes, Intel is phasing out the front-side bus and moving pretty much everything to some combination of QPI and DMI, depending on the system. For many low-end systems, the CPU and the northbridge/memory controller are integrated in the same package or on the same piece of silicon. This CPU package or chip then connects to the chipset via DMI. The chipset talks to the rest of the world (keyboard, discrete graphics, sound, networking, and so on).

Intel is following the general trend of integration, putting more and more functions into a single chip. It is also pursuing platform integration, providing the CPU and any chipset components together. AMD is taking the same platform approach, which was one of the benefits of its ATI acquisition.

Is this just Intel? Doesn't AMD have a similar strategy?
Both Intel and AMD are discouraging Nvidia (and some other companies) from providing third-party chipsets, because they both want to increase their share of the PC system dollars. This generally applies to both Intel's mainstream products and Atom. In some cases, OEMs also prefer to have "one throat to choke" when something goes wrong. There are further benefits to both Intel and AMD here: because the interface between the CPU and chipset is controlled internally, it can be a bit more flexible than a rigidly followed standard, which can make development and bug fixing easier in some cases.

Intel has already integrated a 32-nanometer (nm) CPU and a 45nm GPU/chipset into a single package (using two chips) for the Clarkdale and Arrandale (Core i3, i5, and some i7) processors. Intel's Sandy Bridge (expected later this year) will integrate the CPU, GPU, and chipset into a single 32nm chip. Similarly, AMD is betting its future on Fusion products, which integrate a CPU, GPU, and chipset onto a single chip. AMD's first two Fusion products are expected next year, using 32nm (Llano) and 40nm (Bobcat) manufacturing.
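To keep that timeline straight, here is the roadmap from the answer above captured as plain data. The product names and process nodes come from the interview; the structure and field layout are just an illustrative sketch.

```python
# Integration roadmap described above, as plain data. Facts are from the
# interview; the table structure itself is illustrative.
roadmap = [
    ("Intel", "Clarkdale/Arrandale",
     "32nm CPU + 45nm GPU/chipset in one package (two chips)"),
    ("Intel", "Sandy Bridge",
     "CPU, GPU, and chipset on a single 32nm chip"),
    ("AMD",   "Llano (Fusion)",
     "CPU, GPU, and chipset on a single 32nm chip"),
    ("AMD",   "Bobcat (Fusion)",
     "CPU, GPU, and chipset on a single 40nm chip"),
]

for vendor, product, integration in roadmap:
    print(f"{vendor:5} {product:22} {integration}")
```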

So, how does all of this affect the Intel-Nvidia legal wrangling?
When these products debut, there's no reason to buy a separate chipset from a third party (such as Nvidia), since it would just duplicate what Intel/AMD already include and add cost and increase power consumption. Instead, if an OEM wants better graphics performance (the selling point for Nvidia's chipsets), it makes more sense to simply use a discrete GPU from Nvidia or AMD and connect it via PCI-Express.
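As a concrete illustration of that PCI-Express path, the short Python sketch below lists display controllers on a Linux machine by reading the standard sysfs PCI tree. It assumes a Linux system with /sys/bus/pci mounted; the vendor IDs are well-known PCI assignments, and everything else is my own scaffolding rather than any vendor's tool.

```python
# Minimal sketch: spot display controllers (integrated or discrete) on a
# Linux machine via sysfs, and name the vendor. Assumes Linux; the
# vendor-ID table is standard PCI data, the rest is illustrative.
import glob
import os

VENDORS = {"0x8086": "Intel", "0x10de": "Nvidia", "0x1002": "AMD/ATI"}

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    with open(os.path.join(dev, "class")) as f:
        pci_class = f.read().strip()
    if not pci_class.startswith("0x03"):   # 0x03xxxx = display controller
        continue
    with open(os.path.join(dev, "vendor")) as f:
        vendor = f.read().strip()
    print(os.path.basename(dev), "display controller:",
          VENDORS.get(vendor, vendor))
```

On a system with both Intel integrated graphics and a discrete Nvidia card, this would print two entries: the integrated GPU (now inside the CPU package on newer parts) and the add-in board attached over PCI-Express.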

Everyone agrees that Nvidia and Intel signed a 2004 cross-license agreement allowing Nvidia to use the FSB for chipsets in PCs (there was also an earlier cross-license agreement covering the original Xbox). Two Nvidia products based on this cross-license are the nForce chipset (for mainstream CPUs) and the Ion chipset (for Atom). The main advantage of an Nvidia chipset is that its integrated graphics are better than Intel's. At the time of the 2004 cross-license agreement, the FSB and DMI both existed, and QPI was under development at Intel. Everyone agrees that the cross-license agreement covers the FSB. However, there are at least two additional questions:

1. Does the cross license also cover DMI?
2. Does the cross license also cover future technologies (e.g., CSI, now known as QPI)?


Intel's position is no on both counts; Nvidia's position is yes. The answers are unclear, but probably reside in the actual 2004 cross-license agreement (which is not publicly available).

Cross-license agreements tend to have a number of restrictions to limit their scope. For example, Intel and AMD had a cross-license agreement that required AMD to manufacture most of its CPUs internally (a restriction that was recently changed). Similarly, Intel has licensed out QPI to a number of OEMs for specific applications; SGI and IBM, for example, build custom chipsets for large servers using Nehalem-EX. Those licensing agreements are not going to be unlimited, and probably prohibit the OEMs from building custom chipsets for desktops or notebooks.

So, what is the crux of the matter?
So the real question is whether the original licensing agreement between Intel and Nvidia contained these sorts of restrictions. Both Intel and Nvidia have enough money to hire very good lawyers, so I'd assume some restrictions were in place; what we are seeing is squabbling over how to interpret them.

What about the FTC case specifically?
The FTC case is slightly different. Generally, companies are free to license (or not license) their IP to third parties. The two major exceptions I can think of are the IBM and Microsoft consent decrees and antitrust settlements.

Nvidia may be hoping that the FTC shows that Intel is a monopoly and has abused its position, and that the FTC forces Intel to license out the technology that Nvidia wants. However, that will probably occur only if the case actually goes to trial and the FTC wins. If they settle, the FTC and Intel will by definition reach a compromise of some sort, with each side getting some of what it wants. It seems likely that Intel will strongly resist any requirement to license out IP to third parties.

From a business perspective though, the legal questions (in either case) may be somewhat academic. Court cases take a long time to resolve. And Moore's Law strongly encourages integrating the CPU, GPU and chipset together for economic reasons. Once Intel has integrated those pieces into the same package or chip, a license for DMI may not be very helpful.