Since the Electronic Entertainment Expo held in May this year we’ve heard increasing concern about the capabilities of the multi-core processors to be included in soon-to-be-released game consoles. Now PC hardware developers are raising similar concerns about the multi-core processors in desktop PCs. The claim is that software developers simply don’t know how to take advantage of them!
Well-known games developers such as Gabe Newell and John Carmack have been vocal in expressing their concerns about multi-core processor technology, and for good reason. Graphics processor technology has already outstripped central processor technology and is left waiting for it to catch up. The PowerPC-based processors in the upcoming games consoles are limited in their capacity to process data such as AI and physics fast enough to feed the graphics circuitry with what it is capable of handling. The learning curve for game developers who want to extract the full potential of the processors in these systems is very steep, and the concern is that by the time the systems can be fully exploited they will already have become redundant.
On the desktop PC side of the auditorium, hardware manufacturers such as Nvidia are beginning to express similar concerns. High-end display cards for PCs have already gone well beyond the point where PC processors can keep up with them. All modern high-end 3D graphics cards are limited by the CPU, and there’s not much light on the horizon. Single-core development has hit a brick wall in terms of sheer processing power, and multi-core technology faces the same hurdle as the consoles: developers simply do not know how to make the best use of it.
Parallel processing techniques are far more easily applied to graphics processors than to CPUs because of quite fundamental architectural differences. Everything on a graphics processor is parallel from the get-go: more pixel pipelines, shading engines or whatever else can be added to make it more powerful, and the basic programming model accommodates those additions naturally. Central processing units, on the other hand, may gain more cores, but parallelism isn’t built into their architecture or into the fundamental concepts of programming for them. Certain applications are ‘threaded’ by nature, but for applications which aren’t already fundamentally multi-threaded it is quite difficult to recast their algorithms as parallel threads. You can add more cores, but the parallelism gets lost in the programming, and the extra cores can end up with nothing to do!
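To make the distinction concrete, here is a minimal sketch in C++ (using the std::thread facilities of later C++ standards, purely for illustration; the function names and figures are invented, not drawn from any of the developers quoted here). The particle update splits cleanly across two cores because every element is independent, while the second loop carries a dependency from one iteration to the next, so an extra core has nothing useful to do.

```cpp
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

// Work that is "threaded by nature": each position depends only on its own
// velocity, so any slice of the array can be handed to a separate core.
void update_positions(std::vector<float>& pos, const std::vector<float>& vel,
                      float dt, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        pos[i] += vel[i] * dt;  // no element depends on any other
}

int main() {
    const std::size_t n = 1000000;
    std::vector<float> pos(n, 0.0f), vel(n, 1.0f);

    // Easy case: split the independent work across two cores.
    std::thread t1(update_positions, std::ref(pos), std::cref(vel), 0.016f,
                   std::size_t{0}, n / 2);
    std::thread t2(update_positions, std::ref(pos), std::cref(vel), 0.016f,
                   n / 2, n);
    t1.join();
    t2.join();

    // Hard case: a loop-carried dependency. Each step needs the result of
    // the previous one, so simply adding a second core achieves nothing
    // without rethinking the algorithm itself.
    float running = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        running = running * 0.99f + pos[i];  // depends on the previous iteration

    std::printf("%f\n", running);
    return 0;
}
```

The first kind of workload is what graphics hardware thrives on; the second is what much existing desktop software looks like, which is why extra CPU cores so often sit idle.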
Yes, that’s an obstacle which can be overcome, but just when can we expect the hurdle to be jumped? Commentators such as Nvidia’s David Kirk suggest that we are confronted with a crisis, because the training of programmers simply isn’t conducive to overcoming it. Rather than offering parallel programming techniques as post-graduate courses, as most universities currently do, the concepts should become a fundamental component of undergraduate courses. At present we are seeing graduates emerge who are not adequately prepared for the hardware they are about to program.
The hardware commentary is exciting. Consumers heading for multi-core systems, and the door to the future they promise, are turning from a trickle into a torrent. Programs designed to exploit the new technology will soon follow, we are led to believe.
But will they?