Twilight of the GPU.

2006-7: CPUs become so fast and powerful that 3D hardware will be only marginally beneficial for rendering, relative to the limits of the human visual system, therefore 3D chips will likely be deemed a waste of silicon (and more expensive bus plumbing), so the world will transition back to software-driven rendering. And, at this point, there will be a new renaissance in non-traditional architectures such as voxel rendering and REYES-style microfacets, enabled by the generality of CPUs driving the rendering process. If this is the case, then the 3D hardware revolution sparked by 3dfx in 1997 will prove to only be a 10-year hiatus from the natural evolution of CPU-driven rendering.

Read the above, and if you find it interesting, read the whole article. I mainly just wanted a discussion on the quoted section.

Twilight of the GPU: an epic interview with Tim Sweeney: Page 1

Discuss.
 
The article makes no sense; current GPUs are many times faster than CPUs.

We have x86-based APIs for both Nvidia and ATI.
 
No, they're faster in any application run under x86.

Badaboom media converter, F@H, and there's also an MP3 encoder.

F@H has nothing to do with graphics either; it's nothing but number crunching.


Lots of people are taking advantage of it; check out all these articles:
CUDA in the News - NVIDIA
 
You can't say GPUs are faster under any x86 application, because they can't even run every x86 application, only the ones that are specially coded for them in C/C++. While those are popular languages, they are not the only ones. There is also the issue of CUDA not supporting things like recursion, which is a very useful tool in programming.

Also, encoding is an example of an application that can take advantage of the parallelism possible with GPUs. Folding uses an arbitrary point system, so it cannot be used as an accurate measure of performance between CPUs and GPUs, although it may be in a similar situation to encoding.
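
For reference, this is roughly what that specially-coded path looks like: the work gets written as a tiny C-style function (a "kernel") that the card runs across thousands of threads. This is only a minimal sketch with made-up names and sizes, not anyone's actual folding or encoding code, and note that at the time a kernel couldn't call itself recursively.

Code:
// Minimal CUDA sketch (hypothetical names and sizes): scale a big array on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // each thread handles one element
    if (i < n)
        data[i] *= factor;                          // straight-line C, no recursion
}

int main()
{
    const int n = 1 << 20;                          // about a million floats
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);    // ~4096 blocks of 256 threads each
    cudaDeviceSynchronize();

    cudaFree(d);
    printf("kernel finished\n");
    return 0;
}

Anything you want to run on the card has to be rewritten in that style, which is exactly why only the specially coded programs benefit.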
 
I worded that wrong; I should have said they're faster in any x86-based application coded for them. Sure, GPUs can't run every program, but the ones they can run, they run much faster.

It's kind of like a drag car: fast as heck but limited in what it can do, whereas a regular car can do more, just much slower.

I see folding as an accurate measure because the GPU and CPU are doing the same thing; both are using x86/SSE instructions to run the simulation.

If you look at the systray client and the logfiles, they're pretty much identical.

And the cores are the same too; for example, the CPU clients use the Gromacs/SMP Gromacs core, while the GPU client runs off the GPUv2 Gromacs core.
 
I worded that wrong; I should have said they're faster in any x86-based application coded for them.

Not really. GPUs aren't x86, and only certain tasks like encoding and 3D rendering benefit from massive parallelism. Most things the CPU would handle better. The GPU is pretty specialized and does its job well, but there's a reason we don't use GPUs for everything: when your only tool is a hammer, every problem looks like a nail, and hammering screws doesn't work very well.

We need CPUs because they are general-purpose processors. It is much faster and much cheaper to have one generalized processor handle most things than to have a bunch of super-specialized processors. GPUs exist because we have decided that graphics is important enough to warrant its own dedicated processor. Now, it is true that the architecture best for graphics can also have speed benefits elsewhere, but there is no way a GPU can replace a CPU.


You know, Intel would actually argue the other side, that CPUs are better at graphics than GPUs. Case in point: Larrabee, essentially a load of simple Pentium-class x86 cores stuck together. They are banking on it being easier to code video games for an x86 chip dedicated to graphics than for a specialized GPU with its own machine code.
 
The whole point of CUDA is to make GPUs perform under x86 instructions.

No, they are not natively x86... but with CUDA, yes they are. And they're not just a little bit faster, man, they're loads faster.

Even a quad-core running the SMP client (which takes advantage of all cores) at 4GHz can't keep up with an 8800-series card.

If you look at all those articles, you'll see how many people are adding GPUs to their arsenal.

Still don't believe me that F@H is an x86-based app?

Look at the logfile below from one of my GPUs, and take note of the info it gives about the compiler:

Code:
[21:51:14] - Ask before connecting: No
[21:51:14] - User name: ricanflow (Team 12864)
[21:51:14] - User ID: 3AAB9C1551DA749B
[21:51:14] - Machine ID: 1
[21:51:14] 
[21:51:14] Loaded queue successfully.
[21:51:14] Initialization complete
[21:51:14] 
[21:51:14] + Processing work unit
[21:51:14] Core required: FahCore_11.exe
[21:51:14] Core found.
[21:51:14] Working on queue slot 08 [December 1 21:51:14 UTC]
[21:51:14] + Working ...
[21:51:15] 
[21:51:15] *------------------------------*
[21:51:15] Folding@Home GPU Core - Beta
[21:51:15] Version 1.19 (Mon Nov 3 09:34:13 PST 2008)
[21:51:15] 
[21:51:15] Compiler  : Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.762 for 80x86 
[21:51:15] Build host: amoeba
[21:51:15] Board Type: Nvidia
[21:51:15] Core      : 
[21:51:15] Preparing to commence simulation
[21:51:15] - Looking at optimizations...
[21:51:15] - Files status OK
[21:51:15] - Expanded 96785 -> 489240 (decompressed 505.4 percent)
[21:51:15] Called DecompressByteArray: compressed_data_size=96785 data_size=489240, decompressed_data_size=489240 diff=0
[21:51:15] - Digital signature verified
[21:51:15] 
[21:51:15] Project: 5754 (Run 6, Clone 38, Gen 3)
[21:51:15] 
[21:51:15] Assembly optimizations on if available.
[21:51:15] Entering M.D.
[21:51:21] Will resume from checkpoint file
[21:51:22] Working on Protein
[21:51:24] Client config found, loading data.
[21:51:24] Starting GUI Server
[21:51:24] Resuming from checkpoint
[21:51:24] Verified work/wudata_08.log
[21:51:24] Verified work/wudata_08.edr
[21:51:24] Verified work/wudata_08.xtc
[21:51:24] Completed 24%
[21:53:33] Completed 25%
Yes, we still need CPUs, I'm not debating that, but modern-day GPUs are far faster at this kind of work.

Now, the first-generation GPU client wasn't x86-based; it actually ran off DirectX (hence why it was much slower and unstable as heck) and relied on acting as if it were playing a game, which caused a lot of trouble down the line. The GPUs of that era also only had pixel pipelines instead of shaders.

For example:

Pentium 4 @ 3GHz = 300 PPD
X1650 XT = 500 PPD
PS3 = 900 PPD
Q6600 @ 4GHz = 3,500 PPD
8800 GTS 512 = 6,000 PPD
 
No one said folding wasn't x86. I never said GPUs aren't good for certain tasks; in fact, I think GPGPU applications are very promising.

The point I am making is that GPUs are not better than CPUs at everything. GPUs are only better in applications that can take advantage of extensive parallelism. All of the applications you have listed can take advantage of that parallelism, and that is why they are faster on GPUs.

Not all applications can take advantage of that level of parallelism. It's the same reason dual cores can beat quads in some applications: those applications can't use the extra two cores. Coding to take advantage of a quad is one thing; you are talking about a maximum of eight threads, which most tasks can be broken down into. A GPU takes this concept to the extreme. Take a GTX 260 Core 216, for example: it has 216 "cores" that are individually far slower than a Core 2 core, but in situations that can use all 216 of them combined, they can beat the two cores of a Core 2 Duo. The problem is that a lot of things can't be easily broken into 216 threads to take advantage of all of the GPU's "cores", which is why GPUs are only faster in certain situations.
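
To make that concrete, here's a rough sketch (hypothetical CUDA code, made-up names) contrasting work that splits cleanly across hundreds of GPU threads with work that has to stay serial:

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Parallel-friendly: every pixel is independent, so one thread per pixel.
__global__ void brighten(unsigned char *pixels, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pixels[i] = min(255, pixels[i] + 20);       // no thread waits on any other
}

// Serial by nature: step i needs the result of step i - 1, so hundreds of
// slow cores don't help; one fast core does.
float running_total(const float *x, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc = acc * 0.9f + x[i];                    // carried dependency between steps
    return acc;
}

int main()
{
    const int n = 1024;
    unsigned char *img;
    cudaMalloc(&img, n);
    cudaMemset(img, 100, n);                        // fill the fake image with value 100

    brighten<<<(n + 255) / 256, 256>>>(img, n);     // 4 blocks of 256 threads
    cudaDeviceSynchronize();

    unsigned char first;
    cudaMemcpy(&first, img, 1, cudaMemcpyDeviceToHost);

    float samples[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    printf("first pixel: %d, running total: %f\n", first, running_total(samples, 4));

    cudaFree(img);
    return 0;
}

The first function is the shape folding and encoding have; the second is the shape most desktop software has.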
 
Parallel computing - Wikipedia, the free encyclopedia

You should really read this, Rican. Wikipedia has a great article on parallelism. There are many hurdles you have to overcome, both in hardware and in software, to do it right, and there are actually times when making things parallel slows them down (rough numbers on that below).

ILLIAC IV - Wikipedia, the free encyclopedia

You may also want to scan over this article; it's essentially parallelism gone wrong.
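
As a back-of-the-envelope illustration of that slowdown point (every number below is an assumption picked for illustration, not a measurement): for a small job, the fixed cost of shipping data across the bus and launching a kernel can exceed the work itself, so the parallel version loses.

Code:
#include <cstdio>

int main()
{
    const double elements        = 10000.0;   // tiny job
    const double cpu_ns_per_elem = 2.0;       // assume ~2 ns per element on one CPU core
    const double offload_ns      = 50000.0;   // assume ~50 us for transfer + kernel launch overhead
    const double gpu_ns_per_elem = 0.02;      // assume the GPU does the math ~100x faster

    double cpu_time = elements * cpu_ns_per_elem;
    double gpu_time = offload_ns + elements * gpu_ns_per_elem;

    printf("CPU: %.0f ns   GPU: %.0f ns\n", cpu_time, gpu_time);
    // CPU: 20000 ns   GPU: 50200 ns -> the "slow" serial version wins for a job this small.
    return 0;
}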



Puddle is right. Things like folding and encoding can be cut apart into hundreds of threads, but VMs, office apps, web browsers, and a slew of other programs can't.

GPUs aren't "faster" than CPUs, since performance depends entirely on what they're computing. A single stream processor can't stand up to a CPU core, and a GPU can't match the single-threaded capability of a CPU either.
 