Conroe Reviews Released

Ok, I'll reference the very first benchmark here

Apart from one instance, the average framerate didn't vary by more than ~2 FPS between the different CPU speeds. This alone implies that:

1) All CPUs of the same family operate at the same FSB/HTT speed regardless of internal clock frequency, therefore the I/O bandwidth will be equal among all chips of that family. The I/O bandwidth of a modern processor is over 6GB/sec, which dwarfs the size of any instruction stream or compiled data, therefore it takes only a tiny fraction of a second to send instructions from point to point (see the rough transfer-time sketch after this list)

2) A GPU and a CPU are dedicated, independent components working at separate speeds simultaneously. Therefore the speed of either the CPU or the GPU will have absolutely no bearing on the speed of the other component, as they are never directly communicating with one another, nor are they dependent on one another to operate. A GPU will always have a pixel fillrate of x pixels per second and a CPU will always have a set number of FLOPS, regardless of the capabilities of the other

3) Rendering is entirely dependent on the pixel fillrate of the GPU; the CPU has absolutely no workload in regards to drawing the image you see on your screen. Therefore, as illustrated in the benchmarks, a bottleneck is not present (let me remind you that a bottleneck is the slowest communication link in a set of operations). The reason the framerates are the same is because the GPU has reached the limit of how quickly it can render the image, and seeing how the CPU has no contribution to the process of rendering the image, a faster CPU will not produce a faster rendering process. Likewise, adding a faster GPU will not result in more physics calculations in a game, as a GPU has no contribution to the process of executing physics information.
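To put rough numbers on the bandwidth claim in point 1, here's a back-of-the-envelope sketch; the 2 MB per-frame figure is purely an assumption for illustration, not a measured value:

```python
# Back-of-the-envelope transfer time (illustrative numbers only, not measurements):
bus_bandwidth_bytes_per_sec = 6 * 1024**3   # ~6 GB/sec of CPU I/O bandwidth, as quoted above
per_frame_data_bytes = 2 * 1024**2          # assumed ~2 MB of commands/data sent per frame

transfer_time_ms = per_frame_data_bytes / bus_bandwidth_bytes_per_sec * 1000
print(f"Transfer time per frame: {transfer_time_ms:.3f} ms")
# ~0.33 ms, a tiny fraction of the 16.7 ms a frame takes at 60 FPS
```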

Let me repeat the key part: neither part is bottlenecking the other. Once you reach a steady framerate ceiling it simply means that one component is limiting ITSELF; it has absolutely nothing to do with the workload of the other component

Now, by your definition, and according to "all of your supposed benchmarks", the framerate should vary greatly among differently clocked CPUs. I have illustrated that this is not the case, so now I would like to ask you why exactly you think I am wrong, and why exactly you think you are right, instead of simply telling me I am wrong without justifying anything you have said

I don't even know why I'm wasting so much time trying to clarify this. I'm freaking sick of people holding misconceptions about what a bottleneck is versus what it simply means to have a slow piece of hardware... the only bottleneck that exists in PCs is the FSB/memory bus ratio, and any overclocker who uses memory dividers should know exactly what I'm talking about and what EXACTLY a bottleneck is
 
The CPU and GPU form a chain to your monitor. Whichever one can't forward the information fast enough becomes the weakest link and slows down the process.

THE CPU CANNOT MAKE A PICTURE
THE GPU CANNOT THINK

They MUST work TOGETHER to create a fluid series of images. If one is slower at its job than the other, the process is delayed, and that is known as bottlenecking. The slower party in the process is responsible for the bottlenecking.
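Here's a toy sketch of what I mean by a chain; both numbers are completely made up, just to illustrate the weakest-link idea:

```python
# Toy model of the CPU -> GPU -> monitor chain: each stage can handle some
# number of frames per second, and the chain as a whole is capped by the
# slowest stage. Both figures below are invented for illustration.
cpu_fps_capability = 200   # frames of game logic the CPU could prepare per second
gpu_fps_capability = 60    # frames the GPU could actually draw per second

chain_fps = min(cpu_fps_capability, gpu_fps_capability)
print(f"Delivered framerate: {chain_fps} FPS (limited by the weakest link)")
```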


Gaara, you are thinking of bottlenecking as in a dictionary, meaning the delay occurs within the piece of hardware itself, i.e. the FSB etc.

WE ARE TALKING ABOUT A CHAIN OF HARDWARE BEING SLOWED DOWN BY THE WEAKEST LINK. IT IS STILL BOTTLENECKING!!!!!!!
 
Sorry Gaara, that's just not true. I got some other people's opinions on the matter. Exact same stuff that I've been saying.

The CPU runs the game code, pure and simple. The CPU handles AI, physics, motion (which I guess is part of physics), game events, etc. The GPU is just a co-processor; it does not know what it should render until the CPU executes the proper game code first.

Now, as for why there is a limitation: CPUs have advanced to the point that most game code is a cinch. The CPU gives the GPU the information it needs, then waits until the GPU is done drawing what it was told to draw; until the GPU has finished rendering that scene, the CPU knows not to run ahead.

Imagine a military drill instructor calling out commands for a number of marching recruits. The instructor could call out commands at a much faster rate than the recruits could physically march, but that would not speed up the pace at which they marched; they would still be limited by their own feet.

There comes a point where a CPU is already providing enough data to the GPU, and a faster CPU wouldn't make any difference as long as the GPU is the bottleneck.
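Here's a minimal sketch of that wait-for-the-GPU loop; all per-frame timings are invented purely for illustration:

```python
# Minimal model of the frame loop described above: the CPU prepares a frame,
# hands it to the GPU, and can't run ahead until the GPU finishes drawing it,
# so each displayed frame costs at least as long as the slower of the two stages.
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    frame_time_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
    return 1000.0 / frame_time_ms

print(fps(cpu_ms_per_frame=5.0, gpu_ms_per_frame=20.0))   # ~50 FPS, GPU-bound
print(fps(cpu_ms_per_frame=2.5, gpu_ms_per_frame=20.0))   # still ~50 FPS: a 2x faster CPU changes nothing
print(fps(cpu_ms_per_frame=25.0, gpu_ms_per_frame=20.0))  # ~40 FPS: now the slower CPU is the limit
```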
 
The GPU is not a freaking bottleneck though, I swear to god you're just repeating everything I've said and trying to attribute a "bottleneck" to it somehow.

I've said that:

Games are not dependent on the CPU, they are dependent on the GPU. So obviously, as I've always said, whether you pair a 3000+ or a 5000+ with the same video card, in the end it's the video card that's responsible for the FPS, therefore you won't see any change in FPS regardless of CPU speed

OBVIOUSLY adding a faster GPU is going to produce a higher framerate, but all you are doing is ADDING A FASTER INDEPENDENT COMPONENT. It has NOTHING to do with the CPU load, and vice versa. All you're saying is a 4000+ can execute a SuperPI 1M faster than a 3000+, therefore the 3000+ is a bottleneck. It's a FASTER INDEPENDENT COMPONENT, AND IT IS IMPLIED THAT A FASTER COMPONENT TRANSLATES INTO A FASTER EXECUTION OF AN OPERATION.

Likewise, a 10000RPM drive can locate and extract data faster than a 7200RPM drive, therefore I assume you consider a 7200RPM drive a "bottleneck"? That's all you're saying

I have one question for any of you who still believe in a GPU bottleneck: where exactly is this invisible bus that causes this bottleneck? As I've said for the 3rd time, a bottleneck is the slowest link in a set of operations, or in layman's terms, a pathway that can't receive data as quickly as it's being sent. Perhaps if you had a 7900GT in a PCI bus then it'd be a bottleneck, as that bus speed can't keep up with the throughput bandwidth of the card, but PCI-E 16x is MORE than enough for any video card. So tell me, where is this invisible bus that is supposedly slowing everything down?
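For reference, rough peak-bandwidth figures for the two buses (approximate spec numbers, not measured values):

```python
# Approximate peak bandwidths from the bus specs (rough figures for comparison only):
pci_bandwidth_mb_s = 133          # conventional 32-bit/33 MHz PCI, shared by all PCI devices
pcie_x16_bandwidth_mb_s = 4000    # PCI-E 1.x x16, ~250 MB/s per lane per direction

ratio = pcie_x16_bandwidth_mb_s / pci_bandwidth_mb_s
print(f"PCI-E x16 offers roughly {ratio:.0f}x the bandwidth of plain PCI")
```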

Gaara, you are thinking of bottlenecking as in a dictionary, meaning the delay occurs within the piece of hardware itself, i.e. the FSB etc.
That is what a bottleneck is though; you guys are just choosing to call something a bottleneck without understanding what the term really means. In order to bottleneck a GPU you'd either have to decrease the VRAM speed or somehow modify your card so that it sits on a slower bus that cannot handle its throughput. That is all a bottleneck is, a limitation in the throughput. You're trying to think of it as a limitation in the IPC rate of a core
 
gaara said:
That is what a bottleneck is though, you guys are just choosing to call something a bottleneck without understanding what the term really means.

Who cares, my god, we all know what he's talking about. Who gives a crap if the Oxford English Dictionary doesn't? We don't use a lot of words how they're supposed to be used anyways. As long as we know what he's talking about, does it really matter? You're attacking his grammar and not his points. Definitely losing focus.
 
No, because he's suggesting that somehow a CPU has this huge impact on framerate that isn't utilized because of a GPU "bottleneck". I'm telling you all that A) the CPU doesn't really dictate the framerate at all and B) the two cannot bottleneck one another as they are independent and never directly communicate with one another, nor do they rely on one another to complete their own operations

A GPU has a limit just like all other hardware does; that DOES NOT make it a bottleneck. It's so simple I don't see why it's so hard for people to wrap their minds around it. It has nothing to do with grammar, and the only reason I addressed that was because someone else brought it up. I'm talking about how a computer operates and how the framerate cap has nothing to do with a literal bottleneck
 
gaara said:
The CPU doesn't really dictate the framerate at all.

I see. THEN WHY OH WHY IS THERE SUCH A MAGNIFICENT DIFFERENCE IN FPS BETWEEN A PENTIUM-D 820 AND AN X6800 CONROE? This is 1600x1200 resolution, by the way.

[attached benchmark chart: 12588.png]
 
Is the 6300 really that much better at everything, or just this game? And how is it so much better at 1.8 GHz?

How much will the 6300 cost?
 