IDF 2006: Terascale Processing Brings 80 Cores to your Desktop


Osiris


If you’ve never heard the term “terascale” before reading this article, you aren’t alone. Before attending this fall’s IDF, I hadn’t been introduced to the term either. But after hearing and reading about it, and doing a lot of research into the technology, I can tell you that we are going to walk away from this technology overview excited about the future of computing.

The basic premise of terascale computing is being able to work on terabytes of data on a single machine, which would require teraflops of processing power. (A terabyte is approximately 1024 GB, and a teraflop is approximately 1000 gigaflops, or one trillion floating-point operations per second.)
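
For a rough sense of scale (my own back-of-the-envelope arithmetic, not a figure from Intel’s presentation): if an 80-core chip had to sustain one teraflop, each core would only need to average about 12.5 gigaflops.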

[Slide: tera3-01.jpg]


This slide from one of Intel’s presentations shows the progression from single-data, single-core processors, through the era of multi-core processors (which we are in now), and into the world of processors with many more cores than even the quad-core Intel parts announced this week.

[Slide: tera5-01.jpg]


One of the main reasons for the move to multi-core products is that Intel and other chip designers keep increasing the number of transistors that fit in a given area but haven’t alleviated the problems of power and heat. The gigahertz war ended poorly for Intel, which had to step back and redesign its CPUs from the ground up with that lesson in mind. But Moore’s Law still applies, and Intel estimates that by 2011 we’ll be seeing chips with over 32 billion transistors on them! If we can’t increase the power of a single core by cranking up its frequency, what can we do with all those transistor resources? The answer: more cores. Many more.

For our discussion here, the term “terascale” will refer to a processor with 32 or more cores. Moving away from the “large” cores seen in Intel’s Core 2 Duo and AMD’s Athlon 64 lines, the cores in a terascale processor will be much simpler (along the lines of what we are seeing in the Cell processor design). These cores will be low power and probably based on a past-generation Intel architecture that has been refined and perfected. They could deliver 4-5x better performance per watt and allow performance to scale beyond the limits of current-generation instruction-level parallelism.
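
To make the “many simple cores” idea a little more concrete, here is a minimal sketch (my own illustration in C++, not anything from Intel’s presentations) of the thread-level parallelism such a chip depends on: the work is split across however many hardware threads the machine exposes, and on a terascale part the same pattern would simply fan out over 32, 80, or more cores.

#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// Sum a large buffer by giving each core its own slice of the data.
// On today's chips hardware_concurrency() is 2-4; on a terascale part
// the same code would fan out over 32+ simple cores.
double parallel_sum(const std::vector<double>& data)
{
    unsigned num_cores = std::thread::hardware_concurrency();
    if (num_cores == 0) num_cores = 1;

    std::vector<double> partial(num_cores, 0.0);
    std::vector<std::thread> workers;

    const std::size_t chunk = data.size() / num_cores;
    for (unsigned i = 0; i < num_cores; ++i) {
        const std::size_t begin = i * chunk;
        const std::size_t end = (i == num_cores - 1) ? data.size() : begin + chunk;
        workers.emplace_back([&partial, &data, i, begin, end] {
            // Each worker touches only its own slice and its own result slot.
            partial[i] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

int main()
{
    std::vector<double> data(1000000, 1.0);
    std::printf("sum = %f\n", parallel_sum(data));
}

The point is that performance comes from spreading independent chunks of data across many cores, rather than from making any single core faster.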

You should not think of this merely as SMP on a single die; these are vastly different cores with new platform and software requirements. How so? How about memory bandwidth needs of 1.2 terabytes/s, compared to the 12 GB/s in current SMP systems? And what about latency of only 20 cycles for a terascale core, compared to 400 cycles on a modern SMP system? Now you can see the scale we are talking about here, and the significance (and hurdles) these designs introduce.
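
As a quick back-of-the-envelope check (again my own math, not a figure from the presentation): if 1.2 terabytes/s of bandwidth were shared evenly across 80 cores, each core would still get roughly 15 GB/s, which is more than an entire current SMP system has to divide among all of its cores.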


Here
 
That's mind-blowing. My vocabulary is obviously nowhere near as developed as yours is, haha. But from what I can understand, that's going to be insane. Nice post.

What I found interesting was Intel going back and using an old architecture for it. And memory bandwidth of 1.2 terabytes per second!?

Would it be possible for CPUs, well, future CPUs that are out when this comes out, to process that much information that fast and be able to fill up that bandwidth?
 