US Air Force uses PS3s for Air Defense

What Would Crunch Faster?

  • 1700 PS3s + server blades
  • Hex/Octo Xeons + Tesla
*facepalm* Average Joes use GPGPU and CPU crunching on an everyday basis. In simple terms, folding is a program optimized to take advantage of exactly the kind of power you were just talking about in your previous post. You don't need to be an engineer, have a PhD, or build supercomputers to see the exceptional difference in computing power between CPUs and GPUs. Only a fool would say a GPU isn't more powerful than a CPU when used in the same way. To be more specific, we wouldn't be slowly getting programs to encode, etc. on GPUs if CPUs could do it faster. Instead, it's the other way around: people are screaming to do that processing on their GPUs precisely because they ARE so much faster.

Numbers don't mean squat to me on a piece of paper, only evidence you can clearly see. That's why I don't eat up the hype over new GPUs or CPUs that aren't even out yet. If my 4-core i5 can out-fold a PS3, then my 465 can pretty much double even that. I'm going to go out on a limb and say 6/8-core Xeons with HT paired with Tesla units will smash PS3s in computing power. On a 2 million dollar budget, that's a lot of Xeons and Teslas, no matter how much they cost individually. To put an argument aside before it arises: I'm using folding as an example of computing power, not in literal terms for a number-to-number argument.

Way old news? The article was posted 8 days ago... I don't call that old. The age of it is irrelevant anyway, considering the topic is the computing power of A versus B, not how old the article is.

And "average" Joe doesn't have a clue how it works. It's like asking a frequent flyer about aerodynamics and jet engine design.

Also, F@H is hardly killer proof for your argument; last time I checked the point values are arbitrary, so using them to compare performance is meaningless. And if I'm not mistaken, CPUs and GPUs don't even do the same types of work.

No offense, but the Beckton + Tesla counterpoint isn't a good idea. For starters, at best you just doubled your software development costs by throwing in two different architectures that each require unique code - and in the real world it's likely quite a bit more than double, because of the aforementioned difficulty of coding for GPUs. Finally, I don't think you realize just how much a Beckton server costs: a Dell PowerEdge R910 with 2x 2 GHz 8-core Xeons starts at ~$20,000. For that price you can get nearly 70 PS3s, which will likely outperform it by a significant margin.

The last time an article was posted about PS3s I criticized them for not using GPGPU too, but since then I've learned just how uninformed I was about how HPC is really done. Needless to say, approaching HPC from the perspective of PC gaming and overclocking doesn't work very well.
 
Nvidia GPUs, when properly used, make even the baddest of CPUs scream for mercy... I fail to see your or your professor's logic. By properly used I mean GPGPU, benchmarking, or gaming, which includes proper coding for each.

Simple case: a single Westmere Xeon has 6 cores + HT, which = 12 threads. A standard PS3's Cell has a PPE plus 8 SPEs, with one SPE disabled from the factory and one reserved for the OS, so that's 6 usable cores vs 6 cores / 12 threads. The Westmere is going to outdo a PS3 without a doubt. Couple the power of Nvidia GPUs and Xeons under a 2 million dollar budget and... yeah. In another forum that's discussing this same thing, a guy from the military says that Sony cut them a deal on the PS3s. I think given the circumstances Nvidia and Intel would probably cut them a deal on GPUs and CPUs too, so you could squeeze more into that 2 mil budget.
There are two different types of performance: throughput performance and latency performance.

Throughput performance is the number of instructions processed in a given amount of time.
Latency performance is the time between an instruction being issued and it completing.

Latency and throughput are quite often at odds with each other - something even Jen-Hsun Huang has admitted.
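To put rough numbers on that tradeoff, here's a toy model in C. Everything in it is made up purely for illustration (the 6 ns of logic per instruction and the 0.1 ns of latch overhead per stage aren't real figures for any chip): splitting the same work across more pipeline stages lets you clock higher and complete more instructions per second, but every extra stage adds overhead, so the time a single instruction takes from issue to completion gets worse.

[CODE]
#include <stdio.h>

int main(void)
{
    const double work_ns  = 6.0;   /* assumed logic delay per instruction (made up)  */
    const double latch_ns = 0.1;   /* assumed overhead added by each pipeline stage  */
    const int depths[]    = { 1, 6, 20 };

    for (int i = 0; i < 3; i++) {
        int d = depths[i];
        double cycle_ns   = work_ns / d + latch_ns;  /* clock period                      */
        double throughput = 1.0 / cycle_ns;          /* instr completed per ns, pipe full */
        double latency_ns = d * cycle_ns;            /* time from issue to completion     */
        printf("%2d stages: cycle %4.2f ns, throughput %4.2f instr/ns, latency %4.2f ns\n",
               d, cycle_ns, throughput, latency_ns);
    }
    return 0;
}
[/CODE]

Run it and the deep pipeline wins big on throughput while losing on per-instruction latency - that's the whole tradeoff on one screen.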

High throughput performance on GPUs is achieved by having a lot of parallel processors.

CPUs tend to be much better at latency-based performance, however.
A lot of the work done by CPUs is very difficult to split effectively into parallel tasks, whereas the workloads typically done by GPUs are easily parallelized.

The difficulty in parallelizing CPU workloads is that the execution of many instructions depends on the results of previous instructions.
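A quick illustration (these are my own made-up functions, not from any real codebase): the first loop below is the GPU-friendly kind of work, where every iteration is independent and could be handed to thousands of processors at once; the second has a loop-carried dependency, so in this form it has to run one iteration after another.

[CODE]
#include <stddef.h>

/* Every iteration touches only in[i] and out[i], so the iterations are
 * independent and the loop splits cleanly across any number of threads
 * or GPU cores. */
void scale_all(float *out, const float *in, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        out[i] = k * in[i];
}

/* Loop-carried dependency: each iteration needs the sum produced by the
 * previous one, so the iterations cannot simply be handed out in parallel. */
float running_sum(const float *in, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += in[i];   /* depends on the result of the previous iteration */
    return sum;
}
[/CODE]

(A sum like that can still be parallelized with a reduction tree, but the naive serial form is the point: when instructions depend on earlier results, you can't just throw more processors at the problem.)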

When it comes to HyperThreading, essentially what it is is a method of reducing the time the CPU spends idle whenever a thread stalls.
The core doesn't process two threads at a time. It just has two threads issued to it, so that if the prefetching algorithms guess wrong and the CPU has to fetch the right data from RAM before continuing, instead of sitting and waiting for that data, it processes the other thread.

It can increase throughput performance, but not latency performance - it can even cause a latency performance decrease.
When the right data finally arrives from RAM and the stalled thread could continue, with HyperThreading the core quite often has to either wait for the second thread's work to finish, or take time to stop the second thread and save its state, before it can continue processing the first thread.
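If you want to see that effect in numbers, here's a toy, completely made-up cycle model of one core in C (the burst/stall lengths and the one-instruction-per-cycle core are invented just to show the shape of the result, not how any real chip schedules its threads). Each thread alternates a burst of compute with a stall on a cache miss; the "SMT" case lets a second thread use the core whenever the other one is waiting on memory.

[CODE]
#include <stdio.h>

#define W 8     /* compute cycles per burst (made-up number)       */
#define S 6     /* cycles stalled on a cache miss after each burst */
#define B 100   /* bursts of work per thread                       */

struct hw_thread { int bursts_left, compute_left, ready_at, done_at; };

/* One core shared by n hardware threads: each cycle it executes one compute
 * cycle from some thread that is not waiting on memory (round-robin pick).
 * Returns the cycle at which the last thread finishes. */
static int simulate(struct hw_thread *t, int n)
{
    int cycle = 0, finished = 0, last = n - 1;
    while (finished < n) {
        for (int k = 0; k < n; k++) {
            int i = (last + 1 + k) % n;
            if (t[i].bursts_left == 0 || t[i].ready_at > cycle)
                continue;                        /* thread is done, or stalled on memory */
            last = i;
            if (--t[i].compute_left == 0) {      /* this burst's compute is finished     */
                if (--t[i].bursts_left > 0) {
                    t[i].ready_at = cycle + 1 + S;   /* cache miss: wait for memory      */
                    t[i].compute_left = W;
                } else {
                    t[i].done_at = cycle + 1;
                    finished++;
                }
            }
            break;                               /* only one thread gets the core this cycle */
        }
        cycle++;
    }
    return cycle;
}

int main(void)
{
    struct hw_thread alone[1] = { { B, W, 0, 0 } };
    struct hw_thread smt[2]   = { { B, W, 0, 0 }, { B, W, 0, 0 } };

    int t_alone = simulate(alone, 1);
    int t_smt   = simulate(smt, 2);

    printf("one thread, core to itself : %d cycles\n", t_alone);
    printf("two threads sharing (SMT)  : %d cycles for both (vs %d back to back)\n",
           t_smt, 2 * t_alone);
    printf("thread 0 under SMT finished at cycle %d (vs %d alone)\n",
           smt[0].done_at, t_alone);
    return 0;
}
[/CODE]

With these numbers the two threads together finish well before two back-to-back solo runs would (better throughput), but each individual thread takes longer than it would with the core to itself (worse latency) - exactly the HyperThreading tradeoff described above.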
 