US Air Force uses PS3s for Air Defense

What Would Crunch Faster?

  • 1700 PS3s + server blades
  • Hex/Octo Xeons + Tesla

PP Mguire

Build Guru
So, according to this article I'm about to link, the Air Force uses 1,760 PS3s and a server cluster to make a supercomputer for air defense calculations. Supposedly they had a $2 million budget for this... and they chose to do this :wtf:

Air Force Unveils Fastest Defense Supercomputer, Made of 1,760 PlayStation 3s | Popular Science


Here's the point of this thread. To all you calculation heads, what do you think would be better with a $2 million budget: buying hex Xeons coupled with Tesla cards to compute, or all these PS3s? Maybe even octo Xeons. IMO, Xeons and Teslas would crunch faster than all those PS3s.
 
Finally, a good use for PS3s :p J/K


But seriously, a new PS3 is only $300 and you can get used ones for $250 at GameStop. Newegg's cheapest hex Xeon CPU is about $1,000. So what would be faster, four Cell processors or one hex Xeon CPU?
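To put some rough numbers on it, here's a quick back-of-the-envelope sketch. The prices are the ones quoted above ($300 per PS3, ~$1,000 for the cheapest hex Xeon); the extra per-node cost for boards, RAM, etc. on the Xeon side is purely my assumption:

```python
# Back-of-the-envelope budget math. Prices are the ones quoted in this
# thread; the extra per-node platform cost for the Xeon build is an assumption.
BUDGET = 2_000_000            # USD

PS3_PRICE = 300               # new PS3, per this thread
XEON_CPU_PRICE = 1_000        # cheapest hex-core Xeon, per this thread
XEON_PLATFORM_EXTRA = 1_500   # assumed: board, RAM, PSU, chassis per node

ps3_nodes = BUDGET // PS3_PRICE
xeon_nodes = BUDGET // (XEON_CPU_PRICE + XEON_PLATFORM_EXTRA)

print(f"PS3 nodes for the budget:   {ps3_nodes}")      # 6666
print(f"Xeon nodes for the budget:  {xeon_nodes}")     # 800
print(f"Usable SPEs (6 per PS3):    {ps3_nodes * 6}")
print(f"Xeon cores (6 per node):    {xeon_nodes * 6}")
```

Node count alone obviously isn't the whole story, but it shows how far console pricing stretches the budget.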
 

It's not that simple. A lot of the time, having the fastest individual nodes isn't the most effective route when building a supercomputer.

Also, Tesla is way, way overrated. My Computer Design professor, who is very active in the HPC community and just finished building a 64-node GeForce-based supercomputer, basically told the class that all of the quoted performance numbers for GPUs are completely meaningless. In fact, sometime next year an extremely optimized CUDA compiler for x86 processors is supposed to come out that will cut Nvidia's claimed ~300x performance improvement over CPUs down to something like a 2x lead. And to top it off, that lead comes at the expense of working with a programming model I've heard described as "****ishly evil".

More to the point, Cell is actually extremely fast when running properly optimized code, and since a PS3 uses relatively little power compared to PCs, I'm guessing they may have picked it for performance-per-watt reasons. Believe it or not, power usage is a huge concern in the HPC world.
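To see why performance per watt matters, here's a rough sketch with assumed ballpark peak numbers (single precision). None of these figures are measurements, and sustained throughput depends entirely on how well the code is tuned for each chip:

```python
# Rough performance-per-watt sketch. Every figure below is an assumed
# ballpark peak (single precision), not a measurement; sustained numbers
# depend entirely on how well the code is optimized for each chip.
systems = {
    # name: (assumed peak GFLOPS, assumed wall power in watts)
    "PS3 (Cell, 6 usable SPEs)": (150.0, 120.0),
    "Hex Xeon node (CPU only)":  (160.0, 350.0),
    "Hex Xeon node + Tesla":     (1100.0, 600.0),
}

for name, (gflops, watts) in systems.items():
    print(f"{name}: {gflops / watts:.2f} GFLOPS/W (assumed peak)")
```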
 
And with the PS3s you don't have to buy the motherboards, RAM, etc., which would drive the cost of the Xeon route up even more.
 

I'm not sure it would even be viable to build a cluster that large from DIY systems. Even ignoring the initial build time, when you're talking about several hundred or more systems you're going to run into hardware failures on a shockingly frequent basis, and trying to manage that in-house would take a lot of manpower. It would make a lot more sense to go with Dell or HP servers and let them handle all of the support, even if it means increasing the cost per system by a good amount.
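As a rough illustration of how fleet size drives support load, here's a quick sketch using an assumed annual failure rate (the 5% figure is just a placeholder, not a real fleet statistic):

```python
# Quick sketch of why fleet size drives support load: expected hardware
# failures per week, given an assumed annual failure rate per node.
def expected_failures_per_week(node_count, annual_failure_rate=0.05):
    # 0.05 is an assumption (5% of nodes fail per year), not a real statistic
    return node_count * annual_failure_rate / 52

for nodes in (100, 500, 1760, 6000):
    print(f"{nodes:>5} nodes -> ~{expected_failures_per_week(nodes):.1f} failures/week")
```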
 
Nvidia GPUs, when properly used, make even the baddest of CPUs scream for mercy... I fail to see your or your professor's logic. By properly used I mean GPGPU, benchmarking, or gaming, each of which includes proper coding for the hardware.

Simple case: a single Westmere Xeon has 6 cores + HT, which equals 12 threads. A standard PS3 Cell has 7 cores with 1 disabled by default, so that's 6 vs. 6+6. The Westmere is going to outdo a PS3 without a doubt. Now couple the power of Nvidia GPUs and Xeons under a $2 million budget... yeah. In another forum discussing this same thing, a guy from the military says Sony cut them a deal on the PS3s. I think, given the circumstances, Nvidia and Intel would probably cut them a deal on GPUs and CPUs too, so you could squeeze more into that $2 million budget.
 

Do you have a Ph.D. in Computer Engineering? How many supercomputers have you built? You are basically saying that GPUs are better yet providing no evidence to support your claims. Also, I already discussed the discrepancy between the marketing/hype numbers you are looking at and how the parts actually perform on a level playing field.
 

*facepalm* Average Joes use GPGPU and CPU crunching on an everyday basis. In simple terms, Folding is a program optimized to take advantage of that power, like you were just talking about in your previous post. You don't need to be an engineer, have a Ph.D., or build supercomputers to see the exceptional difference in computing power between CPUs and GPUs. Only a fool would say a GPU isn't more powerful than a CPU when used in the same way. To be more specific, we wouldn't slowly be getting programs to encode, etc. on GPUs if CPUs could do it faster. Instead, it's the other way around: people are screaming to do that processing on their GPUs precisely because they ARE so much faster.

Numbers on a piece of paper don't mean squat to me, only evidence you can clearly see. That's why I don't eat up the hype over new GPUs or CPUs that aren't even out yet. If my 4-core i5 can out-fold a PS3, then my 465 can pretty much double even that... I'm gonna go out on a limb and say 6/8-core Xeons with HT paired with Tesla units will smash PS3s in computing power. On a $2 million budget, that's a lot of Xeons and Teslas no matter how much they cost individually. To put an argument aside before it arises, I'm using Folding as an example of computing power, not as a literal number-for-number comparison.

Way old news? The article was posted 8 days ago... I don't call that old. The age is irrelevant anyway, considering the topic is the computing power of A vs. B, not how old the news is.
 