
General discussion

Graphics acceleration on second core of dual-core CPU?

Aug 26, 2006 5:10PM PDT

This is a general question, and I'd be surprised if I'm the first person to think of it. With the proliferation of multiple-core computers, with decent clock speeds and large RAM support, did it not occur to anyone to utilize a secondary core of a multi-core system for graphics acceleration (instead of requiring a graphics card)--pretty much for the sole benefit of gamers? It seems (and perhaps this is where I'm wrong) that graphics processing would be much more efficient if it were done on the CPU by a secondary core through some integrated chipset on the motherboard. It's not like you do much multitasking while playing a game, and games are not yet multi-threaded (I think), so the second core is sitting there doing nothing. Why bother with expensive GPUs if CPUs are much faster? With computers available with gobs of RAM (~4 GB), it seems that one core running at, say, 2.4 GHz and using a second set of, say, 2 GB of dual-channel RAM would yield a much better graphics-processing situation than even the most top-of-the-line graphics card available, and it doesn't seem like it should be that hard. Perhaps I'm wrong, or perhaps it's merely due to pressure from ATI and NVIDIA to keep the graphics-card business going. But it seems like a reasonable next step: it would do away with the newer bulky PCI-E cards, possibly require less power, and would seem to be a cheaper alternative. Does anyone have insight into this? I'm pretty interested to see whether developers have considered it, and whether it's even feasible. Cheers.
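[Editor's note: to make the proposal concrete, here is a minimal, purely illustrative sketch in Python. All function names are hypothetical. "Game logic" stays on the main core while a stand-in software rasterizer runs in a separate process, which the OS is free to schedule on a second core.]

```python
# Illustrative sketch: "graphics" work in a separate process (second
# core), while the main core keeps running single-threaded game logic.
from multiprocessing import Process, Queue

def rasterize(width, height, out):
    """Fill a tiny framebuffer with a gradient -- a stand-in for the
    rasterization work a second core would have to perform."""
    frame = [[(x * 255 // max(width - 1, 1),
               y * 255 // max(height - 1, 1), 0)
              for x in range(width)] for y in range(height)]
    out.put(frame)

def game_logic(ticks):
    """Stand-in for the single-threaded game loop on the first core."""
    state = 0
    for _ in range(ticks):
        state += 1
    return state

if __name__ == "__main__":
    out = Queue()
    renderer = Process(target=rasterize, args=(4, 4, out))
    renderer.start()           # OS may place this on the second core
    state = game_logic(1000)   # main core keeps running the game
    frame = out.get()
    renderer.join()
    print(state, len(frame))
```

This only shows the scheduling arrangement; it says nothing about whether a general-purpose core can match dedicated rasterization hardware, which is the crux of the thread.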


Dual, quad, and other combinations of cores will not, I
Aug 26, 2006 5:36PM PDT

think, be trying to take over the video card's job. No one wants to give either memory or processing power to video functions.

The multi-core processors will be trying to handle lots of image, audio, and video tasks. Those are far more important than games.

Maybe for most light users, but what about the hardcore game
Aug 27, 2006 1:04AM PDT

I appreciate the response, and I completely realize that for the most part gaming is not a priority for most users--so why bother integrating it if, say, only ~10% of users would even benefit from such a thing? But what about the smaller population of gamers? Do you think that executing such a thing is impractical? I can understand that for multitasking on a multi-core computer, sure, you wouldn't want that bogging down your system. But for single-threaded games, why not? I just don't see what's stopping this. Any insight into the possible hardware limitations, and why a dedicated graphics card would be preferable to a full-fledged CPU core doing the graphics processing while the other runs the game? Because I can't think of any reason why not. Thanks again.

Who said anything about 'light users'?
Aug 27, 2006 4:40AM PDT

Multi-core systems will probably not be aimed at 'light users'. I think I mentioned the digital-media areas, including photography, video, and audio. This processing, particularly video and even more particularly high-definition video, is not 'light use'. It is VERY CPU-intensive work. Since heavy-duty video cards already handle hardcore gamers, there is no need to load the CPU with the same work. Those who need intensive video-card-type support will continue to look to the card, where the hardware can be optimized for the job. Those who do not need intensive display support will not want systems that bog down the CPU with those functions.

Video cards which steal main memory have been tried in the past. They failed. Again, most people do not want to share main memory with the video card. Video memory is best located on the video card where it can be optimized, and where it will not interfere with the work that the cpu is doing.

Multi-core is aimed at offering more processing power now that a brick wall has been hit in the effort to increase chip speeds. The idea is to obtain the speed without consuming ever-larger amounts of power, which creates heating problems. Designers will not want to divert that power to unnecessary video support.

I understood what you were saying...
Aug 27, 2006 10:59AM PDT

I think I didn't communicate properly--I meant something along the lines of building a "mostly gaming, not much of anything else" computer (which, I guess, is pretty much an Xbox or PS3). I was mainly trying to emphasize gaming, which, for now, is single-threaded/single-core dependent. It's going to get to the point where (maybe) a graphics card's GPU is as fast as a typical CPU and comes standard with as much RAM as well (likely system hardware will advance too, so graphics cards won't be literally equal, but it's getting close right now). A few NVIDIA and ATI cards are already reaching 1.4 GHz GPU speeds with 1 GB of RAM--that's more than a lot of people's CPU speed/system memory. I think my argument is contingent upon most applications being single-threaded and people not really multitasking while playing a game or doing video editing (which isn't entirely true, I understand that). Once multi-threaded applications become common, my idea is likely useless, because a single application will already make efficient use of multiple cores. It's just that I don't understand why some graphics cards run over a grand when it seems like you could use mainstream CPUs and system RAM to build a faster graphics "card." I'm not sure why it's not going that direction.

Regarding integrated video cards sharing system memory--this has only failed because systems using an integrated video card that shares system RAM typically lack a sufficient amount of system RAM and bandwidth for the purpose. I'm thinking in terms of large sets of dual-channel RAM, which the graphics chipset can allocate in large blocks for video processing (say, if a computer has 4 GB of dual-channel system RAM, why can't 2 GB be devoted to running the system/game during gaming, and the other two to graphics processing--just like using a secondary core?). I totally understand why integrated graphics failed before--poor hardware configurations--and I'm not about to jump ship on the dedicated-video-card idea, don't get me wrong; I just don't see why, for now with single-threaded games, this idea wasn't implemented. Probably because no one in their right mind thought that, with multi-core CPUs, things would remain single-threaded. I guess it might get to the point where a video card is similar to a motherboard--you can "trade up" the GPU and RAM for future upgrades. Nonetheless, thanks for the response and the discussion.
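[Editor's note: one concrete obstacle to the shared-RAM approach is raw memory bandwidth. A back-of-envelope comparison of roughly 2006-era parts can be sketched like this; the figures are approximate peak datasheet values, not measurements.]

```python
# Back-of-envelope memory-bandwidth comparison (approximate 2006 parts).
def bandwidth_gb_s(bus_bits, effective_mt_s):
    """Peak bandwidth = bus width in bytes x effective transfers/sec."""
    return (bus_bits / 8) * effective_mt_s * 1e6 / 1e9

# Two 64-bit channels of DDR2-800: 128-bit effective, 800 MT/s.
dual_channel_ddr2_800 = bandwidth_gb_s(128, 800)     # ~12.8 GB/s

# A high-end 2006 card (e.g. GeForce 7900 GTX class):
# 256-bit GDDR3 at ~1600 MT/s effective.
high_end_2006_card = bandwidth_gb_s(256, 1600)       # ~51.2 GB/s

print(dual_channel_ddr2_800, high_end_2006_card)
```

Even with half of a 4 GB pool dedicated to graphics, the shared system bus delivers roughly a quarter of the dedicated card's peak bandwidth, and the game and the renderer would be contending for it.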

Many applications are already multithreaded. If your
Aug 27, 2006 2:37PM PDT

approach assumes single threading, that world is already gone. Photoshop, as I recall, is already multi-threaded. So is Sony Vegas, which even supports multiple computers for parceling out rendering tasks. Especially as multi-core computers come on the market, more and more applications will take advantage of the hardware.

As far as sharing main memory, I don't see it happening. Systems are always struggling for enough memory, and that will not change. As more work gets done, more memory will be needed. Also, dynamically sharing memory would probably require hardware and OS changes to implement. I simply don't see that happening. Perhaps, on a dedicated gaming machine the things you want will someday be implemented.

Good luck.
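[Editor's note: the parceling-out idea mentioned above — independent rendering tasks handed to whichever core is free — can be sketched as follows. `render_frame` is a hypothetical stand-in for real per-frame work.]

```python
# Rough sketch of parceling independent rendering tasks across cores,
# in the spirit of the render-farm approach described above.
from multiprocessing import Pool

def render_frame(n):
    """Pretend to render frame n; returns a checksum-like result."""
    return sum((n * i) % 251 for i in range(1000))

def render_all(frame_count, workers=2):
    # Each frame is independent of the others, so a pool can hand
    # frames to idle cores -- the easy case for going multi-core.
    with Pool(processes=workers) as pool:
        return pool.map(render_frame, range(frame_count))

if __name__ == "__main__":
    results = render_all(8)
    print(len(results))
```

Once applications split work this way on their own, a spare core stops being "free" for graphics duty, which is the point the reply is making.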

How far down the rabbit hole?
Aug 27, 2006 1:20AM PDT

This was noted and documented at Intel over a decade ago. So the really short answer is yes, it's been written up.

The reason it's not going to work is that many graphics operations are best done in dedicated hardware. But I don't want to dive into how 3D graphics cards work. You need to Google more and see what turns up.

A background in electronics and CPU design would help too.

Bob