Since the Electronic Entertainment Expo held in May this year we’ve heard increasing concern about the multi-core processors to be included in soon-to-be-released game consoles. Now PC hardware developers are raising similar concerns about the multi-core processors shipping in desktop PCs. The claim is simple: software developers just do not know how to take advantage of them!

Well-known games industry identities such as Gabe Newell and John Carmack have been loud in expressing their concerns about multi-core processor technology, and for good reason. Graphics processor technology has already outstripped central processor technology and is left waiting at the station. The PowerPC-based processors in the upcoming games consoles are limited in their capacity to process data such as AI and physics fast enough to feed the graphics circuitry with everything it is capable of handling. The learning curve for game developers who want to extract the full potential of these processors is very steep, and the concern is that by the time the systems can be fully exploited they will already have become obsolete.

On the desktop PC side of the auditorium, hardware manufacturers such as Nvidia are beginning to express similar concerns. High-end display cards for PCs have already gone well beyond the point where PC processors can keep up with them: every modern high-end 3D graphics card is CPU-limited, and there’s not much light on the horizon. Single-core development has hit a brick wall in terms of sheer processing power, and multi-core technology faces the same hurdle as the consoles. Developers simply do not know how to make the best use of it.

Parallel processing techniques are far more easily applied to graphics processors than to CPUs because of quite fundamental architectural differences. Everything on a graphics processor is parallel from the get-go. More pixel pipelines, shading engines or whatever else can be added to make the things more powerful, and the basic concepts of programming for them make the extra hardware easy to exploit. Central processing units, on the other hand, may have more cores added, but parallelism isn’t built into their architecture or into the fundamental concepts of programming for them. Certain applications are ‘threaded’ by nature, but it is quite difficult to translate sequential algorithms into parallel threads for applications which aren’t already fundamentally multi-threaded. You can add more cores, but the parallelism gets lost in the programming, and the extra cores can end up with nothing to do!
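To make the contrast concrete, here is a minimal sketch (not from the article; the function names `shade` and `simulate` are purely illustrative) of the two kinds of work being described. Per-pixel shading is independent work that any core can pick up, while a step-by-step simulation forms a serial chain that extra cores cannot help with, no matter how many you add.

```python
# Illustrative sketch: data-parallel work vs. inherently serial work.
from concurrent.futures import ProcessPoolExecutor


def shade(pixel):
    # Each pixel is independent of every other -- GPU-style parallelism.
    # Doubling stands in for a real shading calculation.
    return pixel * 2


def simulate(state, steps):
    # Each step depends on the previous one's result -- a serial chain.
    # Extra cores have nothing to do while this runs.
    for _ in range(steps):
        state = state * 3 % 1000
    return state


if __name__ == "__main__":
    pixels = list(range(8))
    # Independent items spread trivially across worker processes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        shaded = list(pool.map(shade, pixels))
    print(shaded)          # [0, 2, 4, 6, 8, 10, 12, 14]
    print(simulate(7, 5))  # 701 -- must be computed one step at a time
```

The first workload scales with core count almost for free; the second has to be restructured (or left on one core) before multiple cores do it any good, which is exactly the programming problem the article is describing.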

Yes, that’s an obstacle which can be overcome, but just when can we expect that hurdle to be jumped? Commentators such as Nvidia’s David Kirk suggest that we are confronted with a crisis, because the training of programmers simply isn’t conducive to overcoming the obstacle. Rather than offering parallel programming techniques as post-graduate courses, as most universities currently do, the concepts should become a fundamental component of undergraduate courses. Currently, we are seeing graduates emerge who are not adequately prepared for the hardware they are about to program for.

The hardware commentary has us excited. Consumers heading for multi-core systems, and a door to the future, are turning from a trickle into a torrent. Programs designed to exploit the new technology will soon emerge, we are led to believe.

But will they?



Of course this shows only one side of the story.
The real drive behind multicore processors is NOT the games industry (though gamers, being tech junkies almost by nature, will pick them up and then complain they're not getting the magnificent performance boosts they expected. "I'm not getting twice the fps, XXX sux" will be a widespread complaint).
The real power will come first from large scientific and financial applications, maybe CAD applications, in general things that are often run on multi-CPU machines today.
And of course servers. With multicore CPUs the Intel line comes into direct competition with a larger section of the high-end machines like the RS/6000.

It's only intended to depict one side of the story, jwenting. The article specifically mentions and is discussing desktop PCs. That is the market toward which dual-core processors are currently being promoted. Ordinary, everyday desktop PCs.

For workstation and server applications, dual- and multi-core processors are already relevant, and serve the same purpose as dual- and multi-processor systems have done for quite some time, as you have said.

Multi-core processors have virtually no applicability to gaming systems. They currently have only very marginal applicability to desktop PCs used for everyday applications use. There's no doubt that they have the potential to impact greatly in that area, and the concerns held are that the necessary development in programming skills is not happening quickly enough.

This is nonsense. Sure, CPUs are being surpassed by GPUs, but I don't see how this is a cause for "concern." Is there really any problem with having more processing power than game developers know how to use? Those that can use it will, and those that can't won't. This will in no way hurt the games. If we were to stick with single core chips then those few that did know how to use the chips wouldn't be able to, and everything else would be the same. I don't see what the problem with adding dual core chips is.

And your assertion that "single core development has hit a brick wall in terms of sheer processing power" is just false. It may be that GPUs are being developed faster than single-core CPUs, but Moore's law continues to apply, and CPU speed continues to increase. The chip makers have not hit any brick wall, and I'm not sure where you got the idea that they had.

Also, dual core chips clearly have applications outside of the gaming industry. They can really be applied in just about any task, if the developer knows how to use them. Now it may be the case that not all developers are familiar enough with them yet, but some surely are, and more will be in the future.

And of course even if the game programmers are too stupid to learn how to use multi-core CPUs (which is what the video card makers are effectively implying, a massive insult to some of the best minds in the programming business...) the operating system programmers (especially those at Microsoft, the rest are irrelevant to the games industry) certainly DO know.
They'll just program the OS to put the game on one core while the OS merrily steams along on another, thus giving both more room than they have now and increasing performance.
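The core-pinning idea in that reply is real, and OSs do expose it directly. The thread doesn't name any particular API, so as one hedged, Linux-only illustration, here is how a process can be restricted to a single core using Python's `os.sched_setaffinity`, leaving the remaining cores free for everything else:

```python
# Linux-only sketch of CPU affinity: pin this process to one core.
import os

PID_SELF = 0  # pid 0 means "the calling process"

# The set of cores the scheduler may currently place us on.
available = os.sched_getaffinity(PID_SELF)

# Pin ourselves to the lowest-numbered core in that set; the OS is
# then free to schedule other work on the remaining cores.
first_core = min(available)
os.sched_setaffinity(PID_SELF, {first_core})

print(os.sched_getaffinity(PID_SELF))  # now a one-element set
```

On Windows the equivalent call is `SetProcessAffinityMask`; either way, the scheduler can do this without the game itself being multi-threaded, which is the reply's point.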

I suppose what this article is about is that graphic cards have become too efficient. Games that played quite well with older and somewhat slower models of the cards look like crap on these new cards. A couple months ago my monitor died so I bought a new flat-screen monitor and a new latest-and-greatest graphics card. My favorite game that played quite well on the older card is very jerky on the new one -- the character no longer runs smoothly across the screen. It might be that problem that this article is complaining about. I suspect game programmers are having a really difficult time keeping the game flowing as smoothly as they would like.
