Are you people daft? Do you seriously think that these processors are going to be priced that high? You've got to be kidding me...you're taking the price of the highest-end, most expensive processor, and trying to guess the prices of the budget segment from that. Does that sound ridiculous to anyone else?
An E6300 isn't going to cost $247, and an E6600 isn't going to cost $425, which is what you'd get if you extrapolated from the price of the X6800. There's always price gouging at the very high-end segment. As was predicted on OC Forums, the price deviation shouldn't be more than around $20, higher OR lower.
reggie_da_man said:
I'm honestly not that impressed with Conroe. AMD is not screwed. Look at the performance difference, it's minimal. It's nice not to have a biased benchmark for once. Imagine what will happen when AMD comes out with their 65nm CPUs. As I said a long time ago...if the Conroe truly does prove to be better I might make a switch but not now. I'm sticking with AMD.
You seem to have forgotten that the E6700 they used in the [H]ardOCP benchmarks costs half as much as the FX-60, and still matches it (beats it, actually). Gaming at high resolutions isn't an accurate measure of a processor's performance. Did you look at anything other than the [H] benchmarks? I posted five or six more below that. Look at ANY of them, and no Athlon 64 even comes close to touching the E6600. A $316 (give or take $20) processor that beats anything AMD has? Remember, most people don't game, and those that do have absolutely no reason to go for AMD anymore, because Conroes are so cheap. AMD IS SCREWED.
gaara said:
and once again, there is no such thing as a freaking GPU bottleneck. I don't know how to make it clear, the CPU is only responsible for compiling information such as how certain enemies attack or how the environment reacts which literally has nothing to do with framerate, the GPU is responsible for collecting all the information the CPU sends it and drawing the appropriate picture as quickly as possible
Instead of trying to make it clear, why don't you try listening to someone else for a bit? You're absolutely correct that you don't need an uber-strong CPU to play a game, but it's not for the reasons you describe. Games are getting more and more complex by the day. What you described up there was the CPU handling the AI and the physics in a game.
I'm going to break this off into its own section, since it's a mini-essay in itself.
---------------------------------------
Now, the reason the GPU bottlenecks: a GPU has to receive instructions, and those instructions have to come from the CPU. A GPU is a much stronger processor than a CPU in terms of raw power, but it also has far more work to do. The GPU actually has to execute the rendering, while the CPU just has to tell it what to do. In that situation, you can imagine the CPU being effectively "stronger" than the GPU. The bottleneck occurs when the CPU is already sending more instructions to the GPU than the GPU can execute. Moving up to a faster CPU, to increase the number of instructions sent to the GPU, won't help you, because the GPU is already executing as fast as it can. Therein lies the bottleneck.
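If it helps, here's a toy model of that pipeline. All the numbers are invented purely for illustration, not benchmarks of any real CPU or GPU: per frame, the slower of the two stages sets the frame time, so once the GPU is the slower stage, speeding up the CPU changes nothing.

```python
# Toy model of a render pipeline. All ms/frame figures are invented
# for illustration; this is not a benchmark of any real hardware.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    """The slower pipeline stage gates the frame, so frame time is the max."""
    frame_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
    return 1000.0 / frame_ms

# GPU-bound: the GPU needs 20 ms/frame, the CPU only 8 ms/frame.
baseline = fps(cpu_ms_per_frame=8, gpu_ms_per_frame=20)    # 50.0 FPS
# Doubling CPU speed changes nothing -- the GPU is still the gate.
faster_cpu = fps(cpu_ms_per_frame=4, gpu_ms_per_frame=20)  # 50.0 FPS
# CPU-bound (heavy AI/physics load): now a faster CPU would raise FPS.
cpu_bound = fps(cpu_ms_per_frame=25, gpu_ms_per_frame=10)  # 40.0 FPS
```

Same max() either way; shrinking a stage only matters when that stage is the one setting the frame time.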
Further support for this argument comes from our dear friend, Elder Scrolls 4: Oblivion. Now, you and I agree that a faster CPU usually won't help you in a game, for whatever reason. However, it is painfully clear in Elder Scrolls 4 that the CPU does make a difference. If you're in a GPU-intensive environment, like outside in the forest, there is a GPU bottleneck and the CPU will not make a difference. But once you get into a less GPU-intensive area, like the inside of a town, that GPU bottleneck is removed (to a certain extent). The reason for the FPS increase is the CPU tasks we outlined above: calculating AI and physics. Anyone who has played ES4: Oblivion knows how many characters there are in cities. That amounts to an ENORMOUS amount of AI and physics calculations for the CPU. So much, in fact, that a better CPU makes a world of difference. At that point, the CPU is actually being pushed, and a better CPU will improve your FPS.
I don't think I've mentioned this yet, but the reason a CPU doesn't make a difference in MOST of TODAY's games is that even our lower-end CPUs can handle the amounts of AI and physics thrown at them. It is estimated that an Athlon 64 3200+ could easily be paired with a 7900GTX and handle games that came out a few months ago. Maybe push that up to an A64 3500+ or 3700+ for today's games, and you're set...for now.
So why do people go for a dual-core Opteron 165 @ 2.8GHz? Good question. But it's the same reason people should get a Conroe if building a new PC, even for gaming.
---------------------------------------
gaara said:
obviously these things are already hitting as high as 4GHz so I guess there's no reason for anyone to complain
I just feel the need to say: I told you so.
P.P. Mguire said:
If Nubius was here, he would have a mile-long explanation of how there is no bottlenecking.
Contrary to popular belief, Nubius is not God. He CAN be wrong. His beliefs, like all of ours, are founded only on his own observations. There IS room for error. I have many, MANY articles to back up my position; I'm not seeing anything from anyone else.
The General said:
Yeah I am waiting til G80 to build my new computer.
E6600 for sure.
Indeed, exact same as you. I'm building in Jan '07, you?
BennyV04988 said:
And an FX-57 would kick the **** out of a Sempron in gaming. What are you smoking? If you're using anything better than a 6200 with a game less demanding than Oblivion, GPU limitation is minimal.
Not if it was a high-end Sempron. And where the bottleneck occurs depends more on the processor than on the video card. With a 3200+, you have to go higher than a 7900GT before a faster processor will get you any FPS increase.
The only time the GPU isn't the bottleneck in high-res gaming is when you're using a system with something like dual X1900XTXs in CrossFire, or a 7950GX2. That is what Anandtech did, which is why we got to see a significant and clear difference in FPS in their benchmarks. [H]ardOCP used a single 7900GTX (LOL), and that wasn't strong enough to stress these behemoths of CPUs.
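To put toy numbers on that (purely invented figures, not anyone's actual measurements): with a single card the render stage dominates and two very different CPUs tie, while a dual-GPU setup shrinks render time enough for the CPU gap to show.

```python
# Invented ms/frame figures for illustration only -- not real measurements
# of any card or CPU named in this thread.

def fps(cpu_ms, gpu_ms):
    # the slower pipeline stage sets the frame time
    return 1000.0 / max(cpu_ms, gpu_ms)

single_card = 25.0              # render ms/frame, one high-end card, high res
dual_gpu = 10.0                 # render ms/frame, CrossFire/GX2-class setup
slow_cpu, fast_cpu = 12.0, 6.0  # CPU ms/frame for two different processors

# Single card: both CPUs land on 40 FPS -- the benchmark "shows no difference".
print(fps(slow_cpu, single_card), fps(fast_cpu, single_card))
# Dual GPU: roughly 83 FPS vs 100 FPS -- now the CPUs separate.
print(fps(slow_cpu, dual_gpu), fps(fast_cpu, dual_gpu))
```

Same benchmark, same CPUs; only the size of the render stage decides whether the CPU difference is visible.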
Which further supports my bottlenecking explanation. Seriously, Gaara, are you going to argue against people like Anand of Anandtech and Kyle Bennett of [H]ardOCP? And me?