How identical do graphics cards need to be for a dual setup?

How did you fry it exactly? It's 99.9% impossible to damage your card through conventional overclocking, with all the safety features in place.

Sweet on the board. Free upgrade sort of lol.
 
Well, that's exactly the point. I was overclocking it moderately using the manufacturer's own tweaking software, so for it to get fried it must have been defective from the get-go, which is why I'm probably going to get a new one.

The only thing I did was go into the GPUTweak software (from Asus) and turn up the GPU clock and the memory clock: the GPU clock from 1020 MHz to 1080 MHz, and the memory clock from 7000 MHz to 7700 MHz. I tried a bit higher on the GPU clock, but it was unstable in 3DMark, so I turned it down; 1080 MHz was the highest stable setting.
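The trial-and-error process described here (raise the clock in steps, run a benchmark, back off at the first unstable setting) can be sketched as a small loop. This is purely illustrative: `find_stable_clock` and its stability-test callback are hypothetical stand-ins for running an external tool like 3DMark by hand, not a real API.

```python
# Hypothetical sketch of the step-up-and-test overclocking loop
# described above. is_stable() stands in for a manual benchmark run
# (e.g. 3DMark); it is not a real library call.

def find_stable_clock(base_mhz, step_mhz, max_mhz, is_stable):
    """Return the highest clock (MHz) that passes the stability test."""
    best = base_mhz
    clock = base_mhz + step_mhz
    while clock <= max_mhz and is_stable(clock):
        best = clock          # this setting passed; remember it
        clock += step_mhz     # try the next step up
    return best               # first failure: back off to last good clock

# Toy model: pretend anything above 1080 MHz crashes the benchmark.
print(find_stable_clock(1020, 15, 1200, lambda mhz: mhz <= 1080))  # 1080
```

The step size and ceiling here are made up; in practice you pick them by feel and patience.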

If I'd been using more "creative" ways to overclock it and had actually managed to go outside the safe limits and override the controls, then it'd be on me and I'd have to buy a new one out of pocket.

On a slightly related note, how does overclocking work if you have more than one graphics card? Can an SLI setup be overclocked at all? Do they have to be overclocked as a set or do you overclock each one separately?
 
There is literally no way for a manufacturer to know if you overclocked using simple software. My go-to is MSI Afterburner, and the slider limits for voltage (really the only thing that can kill a card) are controlled via the BIOS, so you can't go further than the VRM will allow; the limits are usually lower on Nvidia cards. With Nvidia, if a clock is too high or the card gets too hot, the driver simply crashes and resets everything to default until you open the program back up.

On that note, if and when you ever OC, you need to adjust the fan ramp profile to make sure your card stays cool, and monitor temps closely.

When you OC an SLI setup, the two cards get the clocks applied at the same time; one mimics the other. In SLI, VRAM is mirrored, so VRAM clocks match each other, as do core clocks, fan speeds, and voltage. If you SLI, for example, a reference eVGA 780 Ti and a "Superclocked" 780 Ti, the Superclocked card is downclocked to match the reference card and they boost equally: boost is limited to the weaker card.
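The "limited to the weaker card" behaviour amounts to taking the minimum of each setting across the pair. A minimal sketch, with made-up numbers that are not real 780 Ti specs:

```python
# Toy model of SLI clock matching as described above: both cards run
# identical settings, pinned to whichever card's value is lower.
# The example figures are illustrative, not actual 780 Ti clocks.

def sli_effective_clocks(card_a, card_b):
    """Both cards mirror each other, limited by the weaker one."""
    return {key: min(card_a[key], card_b[key]) for key in card_a}

reference    = {"core_mhz": 876, "mem_mhz": 7000}
superclocked = {"core_mhz": 980, "mem_mhz": 7000}
print(sli_effective_clocks(reference, superclocked))
# {'core_mhz': 876, 'mem_mhz': 7000}
```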

To give you an example of how hard it is to kill a card without serious modifications or stupidity: I'm running a 680 FTW+ that has beefed-up VRM circuitry compared to regular 680s, and I'm running a modified BIOS. I set the voltage to always run maxed and raised my boost clock to 1300 MHz by default. The thing is, Nvidia volt-limits most of their cards, so even if I set 1.3 V it will still hard-limit itself to 1.215 V (which sucks). I also raised the TDP limit to 168% so it will use all the voltage the hardware allows. Basically, that equates to a lot of work only to be limited to about 1300 MHz solid because of that hard volt limit Nvidia put in place. Lame.
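The hard voltage cap described here is just a clamp: the card applies the lesser of the requested voltage and the BIOS/driver limit. A one-line toy illustration (the 1.215 V figure comes from the post above; the function itself is hypothetical):

```python
# Toy illustration of the hard voltage cap: whatever the modified BIOS
# requests, the card clamps to the vendor limit. Not a real API.

NVIDIA_VOLT_CAP = 1.215  # volts, per the post above

def applied_voltage(requested_v, cap_v=NVIDIA_VOLT_CAP):
    """The card hard-limits itself to the cap regardless of the request."""
    return min(requested_v, cap_v)

print(applied_voltage(1.3))  # 1.215 -- the 1.3 V request is clamped
```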

When you overclock Nvidia cards, you have to realize that the base core clock gets boosted further by GPU Boost (2.0 on the 700 and 900 series). What this means is that the card will automatically raise your clocks as high as the limit set in your software; in your case, that limit is heat. GPU Boost 1.0 (like on my card), on the other hand, boosts to a specific TDP limit, which is 131%.

Back to your case: if you set a higher base core clock and can't contain the heat, your card will only boost as high as the thermal limit you set. If you set this to 90C with your default base clock, it will boost until your card runs at 90C (not real safe). If you raise your base clock and your card hits 90C, it won't boost any further than before. So basically, the best way to get the most out of your card is to set the thermal limit to something around 85C, max your GPU fans out, and see how high it boosts. When you do this, you'll usually get a higher boost clock when you raise the base clock, because you're keeping temps down.
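The temperature-limited boost behaviour above can be sketched as a toy model: the card keeps adding boost bins until the next one would push it past the thermal limit, so a cooler starting point means more headroom and a higher final clock. The bin size and degrees-per-bin figures here are invented for illustration; real boost tables are far more complex.

```python
# Toy model of GPU Boost 2.0's thermal limiting as described above.
# Bin size (13 MHz) and heat per bin (1.5 C) are made-up numbers.

def boost_clock(base_mhz, start_temp_c, temp_limit_c,
                bin_mhz=13, degrees_per_bin=1.5):
    """Boost in fixed bins until the next bin would exceed the limit."""
    clock, temp = base_mhz, start_temp_c
    while temp + degrees_per_bin <= temp_limit_c:
        clock += bin_mhz
        temp += degrees_per_bin
    return clock

# Better cooling (lower starting temp) leaves more thermal headroom,
# so the same base clock ends up boosting higher.
print(boost_clock(1020, 60, 85))  # 1228 (warm case)
print(boost_clock(1020, 45, 85))  # 1358 (cool case boosts higher)
```

This matches the advice in the post: the win from raising fan speed or case airflow shows up as extra boost headroom, not just lower temps.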

Another thing to remember here: if you're using a non-reference GPU (like an Asus DCU2, eVGA ACX, or Gigabyte Windforce), your GPU's heat is being dumped into your case. If you don't expel that heat properly, you'll start having heat-soak issues, making it harder for your GPU to boost higher. This is why I personally prefer reference blower-style coolers, which push all of that heat out of your case and make sure your GPU is sucking in cool air all the time. The alternative: point a large fan at your PC and go to town lol.
 