How many shader processors equal one pixel pipeline, or vice versa?

Wildside
The reason I ask is that the 7950 GX2 was the last graphics card whose specs told you how many pixel pipelines it had, which is 48.

Nowadays newer graphics cards are measured in shader processors, which confuses me as to how many pixel pipelines they even have anymore. Why do all the manufacturers measure their pixel "power" in shader processors instead of pixel pipelines? Is it more accurate? Or did the industry just change the terminology on us?

I had always thought that more pixel pipelines meant better, and ever since the HD 2900 XT came out with 320, I assumed more was better, but I've since learned that bandwidth matters, not just the number of shader processors.

Since everyone now measures pixel power by the number of shader processors, how do you convert between shader processors and pixel pipelines on the newer cards?

I need the education, so teach me please :), thanks
 
This is from GotFrag Hardware:

GotFrag Hardware said:
A unified shader is one that is capable of executing both vertex and pixel shading instructions. Whereas in the past the numbers of pixel shaders and vertex shaders were carefully balanced and differed from card to card, unified shaders allow manufacturers to use more where more are needed. Imagine having a set of 10 dudes on either side of a wall. All of a sudden, a whole bunch of work is dumped on one side of the wall that will take those 10 dudes 2 hours to complete. Meanwhile, the other 10 dudes are just chilling on the other side of the wall, not only because there is a wall in the way, but because they speak a different language than the first 10 dudes, so they couldn't help with the work anyway. Now imagine all 20 dudes are bunched together and all speak a single hybrid language. That same workload is dumped on the 20 dudes and they are able to knock it out in 1 hour.
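
To put rough numbers on that analogy, here's a toy sketch in Python. All the quantities come straight from the quote (10 dudes, 2 hours); nothing here is measured from real hardware:

Code:
# Toy model of the wall analogy: a fixed pool of workers clearing
# a queue of work, measured in "dude-hours".

def finish_time(work_hours, available_workers):
    # Time to clear the queue when only these workers can touch it.
    return work_hours / available_workers

pixel_work = 20  # 10 dudes x 2 hours, as in the quote

# Split shaders: only the 10 pixel dudes can do pixel work;
# the 10 vertex dudes idle behind the wall.
print(finish_time(pixel_work, 10))  # 2.0 hours

# Unified shaders: the wall comes down and all 20 dudes pitch in.
print(finish_time(pixel_work, 20))  # 1.0 hour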
 
Just forget about pixel pipelines; as a single spec, the number means nothing anymore.

There are separate things called pixel shader units, TMUs (texture mapping units), ROPs (render output pipelines), and vertex shader units:

The more pixel shader units you have, the more pixel shader operations you can execute.
The more vertex shader units you have, the more vertex shader operations you can execute.
The more TMUs you have, the higher your texture fill rate.
The more ROPs you have, the higher your pixel fill rate (see the quick arithmetic below).
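
To make those last two points concrete, here's a back-of-the-envelope sketch, assuming the simple units-times-clock approximation and the commonly quoted 8800 GTX specs (32 TMUs, 24 ROPs, 575 MHz core):

Code:
# Theoretical peak fill rates: units x core clock.
# Clocks are in MHz, so results come out in Mtexels/s and Mpixels/s.
# Real cards complicate this (filtering modes, blending, etc.),
# so treat these as paper peaks only.

def texture_fillrate_mtex(tmus, core_mhz):
    return tmus * core_mhz

def pixel_fillrate_mpix(rops, core_mhz):
    return rops * core_mhz

print(texture_fillrate_mtex(32, 575))  # 18400 -> ~18.4 GTexels/s
print(pixel_fillrate_mpix(24, 575))    # 13800 -> ~13.8 GPixels/s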

..................................

In newer cards there is something called unified shaders, and the difference is that a unified shader can do both pixel shader and vertex shader operations.
 

OK, now that makes more sense to me, thanks lol.

One question though: if this is true, then why don't HD 2900 XT benchmarks always beat the 8800 GTS 640 MB or 320 MB? The HD 2900 XT has 320 shader processors, which is 2-3 times more than either 8800 GTS. From what I've read, something about "bandwidth" makes the 8800 GTS better than the HD 2900 XT, presumably because its bandwidth is higher. I don't understand it too well; mind explaining it to me please?
 

The reason the HD 2900 XT doesn't perform as well as the 8800 GTS in some games is its low TMU count: the HD 2900 XT has 16 TMUs, the 8800 GTS has 24, and the GTX/Ultra have 32 (see the fill-rate comparison below).

There might be other reasons, but this is the only one I know of.
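
To see what those TMU counts do to texturing, multiply TMUs by core clock. This is a sketch using the launch core clocks as I remember them (HD 2900 XT ~742 MHz, 8800 GTS ~500 MHz, 8800 GTX ~575 MHz), so take the exact figures with a grain of salt:

Code:
# Theoretical peak texture fill rate (TMUs x core clock in MHz).

cards = {
    "HD 2900 XT": (16, 742),
    "8800 GTS":   (24, 500),
    "8800 GTX":   (32, 575),
}

for name, (tmus, mhz) in cards.items():
    print(f"{name}: {tmus * mhz / 1000:.1f} GTexels/s")

# HD 2900 XT: 11.9 GTexels/s
# 8800 GTS:   12.0 GTexels/s
# 8800 GTX:   18.4 GTexels/s

So despite its 320 stream processors, the 2900 XT can't put textures on screen any faster than the GTS can.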


EDIT: I also found a post on another forum that explains why the 320 stream processors in the HD 2900 XT are not equivalent to the 128 stream processors in the GTX.

The post can be found here
8800 GTX vs. HD 2900 XT - SLI Zone Forums

And I will quote the post here

You guys need to read the fine print. The 2900 XT does not have 320 stream processors in the same sense that the 8800 GTX has 128. More accurately, the ATI card has 64 blocks of 5 stream processors each. While each block can handle up to five operations per clock, it can only work on one thread at a time. This means the GTX calculates 128 parallel threads and the 2900 calculates 64 parallel threads, but the 2900 can POTENTIALLY execute 5 instructions per thread per clock while the 8800 executes only one. Unfortunately, while the 2900 can keep all 64 five-wide blocks busy, it is difficult to keep all 5 slots in a block busy, meaning the 2900 isn't always running 5 operations per thread.

In the worst case (one instruction per thread), the GTX does twice the work per clock (128 vs 64); in the best case (five instructions per thread), the 2900 does 2.5 times the work of the GTX (320 vs 128). You must also remember that the 8800's shader clock is much higher (~1.5 GHz vs ~0.8 GHz). With the 2900 XT, performance will depend more on how the game is programmed. Point being, comparing these stats is like comparing apples and oranges; these are very different types of cards.
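
Here's that post's best/worst case turned into arithmetic, using its rounded shader clocks (~1.5 GHz and ~0.8 GHz are the post's approximations, not exact specs):

Code:
# Peak shader-op throughput implied by the quoted post.

def peak_gops(threads, ops_per_thread, clock_ghz):
    return threads * ops_per_thread * clock_ghz

gtx      = peak_gops(128, 1, 1.5)  # scalar: 1 op per thread per clock
xt_best  = peak_gops(64, 5, 0.8)   # all 5 slots of every block filled
xt_worst = peak_gops(64, 1, 0.8)   # only 1 of the 5 slots filled

print(gtx, xt_best, xt_worst)  # 192.0 256.0 51.2 (Gops/s)

So on paper the 2900 XT's best case beats the GTX, but its worst case is barely a quarter of it, which is why results swing so much from game to game.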
 

Not to mention ATI can't write drivers to save their lives.
 