DX 10 & next gen GPU information

macdawg

The R600 itself can consume up to 300 watts! There goes the power bill. Not only will I wait for DX10 cards, I will wait until they get more efficient per watt!

* The R600 is estimated to be the largest GPU ever made. Production is based on an 80 nm process, with ATI clearly having 65 nm in its sights. [1]
* TSMC has only just begun 65 nm production of simple designs.[2]
* The R600 might have a clock speed of over 1000 MHz, making it the highest-clocked GPU ever made.[3]
* The R600 will be the first graphics card faster than an X1800GTO that can do dongleless CrossFire (no master card). The X1800XT, X1900GT, X1900XT, and X1900XTX all require a slightly slower master card to run CrossFire. [4]
* 64 unified shader pipelines, 32 TMUs, and 32 ROPs.
* It shares a similar design with the Xbox 360's GPU, "Xenos". However, the R600 will not have the 10 MB embedded DRAM framebuffer daughter die of the "Xenos".
* The R600 will support the upcoming GDDR4 memory interface, with 512 MB running above the X1950XTX's memory clock speed of 1.0 GHz (2.0 GHz effective).
* The current target release date is December 2006. [citation needed]
* Designed "from the ground up," according to sources at ATI, for DX10, however it will be backwards compatible with DX9.
* Designed for Windows Vista
* PCI-E Interface
* HDMI connector (may have support for DisplayPort)
* Anandtech reports that the next generation of GPUs will range in power consumption from 130 W to 300 W. This increase in power consumption will call for higher-wattage power supply units (in the 1 kW-1.2 kW range) and/or the addition of internal, secondary PSUs solely for powering the GPUs (see the rough math sketched below).[5]
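
Rough math on how those PSU numbers get so big: only the 300 W per-card figure comes from the bullets above; the CPU wattage, the rest-of-system draw, and the 80% loading headroom are my own round-number assumptions, so treat this as a sketch rather than a spec.

# Sketch: ballpark PSU sizing for a dual-GPU (CrossFire) box.
# Only the 300 W per-GPU figure comes from the bullets above; the CPU,
# rest-of-system, and headroom numbers are assumed round figures.
gpu_watts = 300             # worst-case next-gen card (the Anandtech figure)
num_gpus = 2                # dongleless CrossFire pair
cpu_watts = 130             # assumed high-end CPU
rest_of_system_watts = 100  # assumed: board, RAM, drives, fans

load = gpu_watts * num_gpus + cpu_watts + rest_of_system_watts  # 830 W
headroom = 0.80             # assumed: keep the PSU at or below ~80% of its rating
suggested_psu = load / headroom                                 # ~1040 W

print(f"Estimated load: {load} W, suggested PSU: ~{suggested_psu:.0f} W")

With those assumptions you land right around 1 kW, which is where the quoted 1 kW-1.2 kW range comes from.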

* The G80 is expected to be released anywhere from October to November of 2006.
* It will be the first Direct3D 10 and Shader Model 4.0 card to be released.
* The G80 will be a 32 pipeline card with 32 pixel shaders, 16 vertex shaders, and 16 geometry shaders per core. The vertex shader pool and geometry shader pool may be unified.
* There will be two cards released at launch: the flagship 8800GTX and the slightly slower 8800GT. The 8800GTX will have 512 MB of RAM, while the 8800GT will have 256 MB-512 MB of RAM and will also have slower core and memory clock speeds, and possibly fewer pipelines, than the 8800GTX.
* The launch price of the 8800GTX will be $499-599 while the 8800GT will initially cost $299-399. [1]
* Following the introduction of the flagship 8800 range, NVidia will introduce cheaper, slower versions of the card with one or more of the following reductions in capability: (a) reduced core clock speed; (b) reduced memory clock speed; (c) reduced memory bus width; (d) reduced pipeline count; (e) reduced chip count (if the 8800 is, in fact, a dual-chip part); and/or (f) reduced complement of memory. If the previous NVidia naming convention is followed, mid-range products will be identified by model number 8600, and low-end parts will be named 8300.
* The G80 will not have unified shaders, but will still be DirectX 10 compliant.
* The memory interface is said to be Samsung's new GDDR4 specification (rough bandwidth math is sketched after this list).
* The series is also rumored to have fast memory and core clocks, but will still be built on 90 nm technology to avoid unnecessary risks.
* Anandtech reports that the next generation of GPUs will range in power consumption from 130 W to 300 W. This increase in power consumption will call for higher-wattage power supply units (in the 800 W to 1100 W range) and/or the addition of internal, secondary PSUs solely for powering the GPUs. [2]
* The G80 should also feature UDI connections with full HDMI support.
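
For a sense of what those GDDR4 clocks are worth, here is the standard peak-bandwidth arithmetic. The 1.0 GHz (2.0 GHz effective) figure is the X1950XTX number quoted above; the 256-bit bus width is just an assumption for illustration, not a leaked spec.

# Sketch: peak memory bandwidth = effective clock x bus width (in bytes).
# GDDR transfers data on both clock edges, so effective clock = 2 x base clock.
base_clock_hz = 1.0e9              # 1.0 GHz base (the X1950XTX figure quoted above)
effective_hz = 2 * base_clock_hz   # 2.0 GHz effective
bus_width_bits = 256               # assumed bus width, purely for illustration

bandwidth_gb_s = effective_hz * (bus_width_bits / 8) / 1e9
print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~64 GB/s

Anything clocked above that base figure, or on a wider bus, scales the result proportionally.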


Anyone understand this about the G80? It's DX10 compliant, but technically it's still built like a DX9 card?
 
macdawg said:
* Anandtech reports that the next generation of GPUs will range in power consumption from 130 W to 300 W. This increase in power consumption will call for higher-wattage power supply units (in the 1 kW-1.2 kW range) and/or the addition of internal, secondary PSUs solely for powering the GPUs.

Wow...just wow. This is getting ridiculous. You shell out $500+ for a top-of-the-line GPU and, ON TOP OF THAT, you probably have to get a new PSU, because nVidia and ATI keep making their GPUs such power-sucking monsters. Where the **** is this going to go?? 500 W GPUs are probably in sight now.

Absolutely ridiculous. :mad: :angry:
 
beedubaya said:
So my Fortron Source 500 W PSU will also need to be replaced when I upgrade my graphics card?


Most likely. If the 300 W worst-case figure above is right, one of these cards plus the rest of the system would already be crowding a 500 W unit.

I don't get why PSUs now have four 12V rails with like 15 amps each. I mean, I dunno, I'd rather have one huge 12V rail with like 60 amps or something.
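
The amp figures are easier to compare once you turn them into watts (P = V x I). The 15 A and 60 A numbers are the ones from the post; the point of the sketch is that the totals match, but the split-rail design caps what any single load can pull from one rail.

# Sketch: same total 12 V wattage, different per-rail limits (P = V x I).
volts = 12.0
per_rail_amps = 15
four_rails_total = 4 * per_rail_amps * volts   # 720 W combined
per_rail_cap = per_rail_amps * volts           # 180 W limit on any one rail
single_rail_total = 60 * volts                 # 720 W, all drawable by one load

print(f"4 x 15 A rails: {four_rails_total:.0f} W total, {per_rail_cap:.0f} W per rail")
print(f"1 x 60 A rail:  {single_rail_total:.0f} W on a single rail")

So a card pulling close to 300 W could, in principle, bump into a 180 W per-rail ceiling unless its draw is spread across rails, which is presumably why one big rail feels simpler.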


:confused:
 