Upcoming 6850 and 6870: Oct. 18th

BOINC is just a program that manages individual projects written to use BOINC. I used to do SETI@home before their servers stopped giving out WUs, and my 465 was tearing through units like crazy. But Milkyway@home is a different story because of the weak double-precision performance of desktop Fermi chips.
 
According to a recent article I read, if you're a folder and want a card dedicated to folding, then the 460 is the best bet. ATI cards can't hold a candle to Nvidia cards for folding.

When did I mention folding? I was talking about GPGPU in general, and my point was confirmed by Zmatt, who has experience in the field.

Has F@H even added support for Evergreen cards yet?
 
Folding is a type of GPGPU usage, so I brought it up. There is more folding out there than just Stanford's excuse for a client. GPGPU applications are about more than just raw memory and SPUs; you also have single and double precision to contend with, which plays a part in which camp you want to go with. For instance, the desktop variants of Fermi have below-average double-precision performance, and ATI cards walk all over them. It's different on the HPC front, though. I brought up folding because that's a standard GPGPU application that consumers like you and I deal with. If you're in the professional HPC market, then you probably won't care about the SPUs and performance of the desktop counterparts.
 

Yes, GPU3 runs them without any flags. However, due to issues with OpenMM or whatever, they still aren't running optimally. ATI and F@H are trying to get it worked out, though.

Stanford's client isn't crap; it just doesn't need to be any more complicated, due to the way they simulate atom motion. Plus, when they need the complicated stuff done, they kick it over to the 24-core monster machines.

Even I know F@H isn't too complicated in terms of GPGPU work. But I think it was one of the first projects to utilize desktop cards... ATI cards, ironically.
 

You have a good general understanding of GPGPU, but you are lacking in-depth knowledge of programming and how the architectures work.

Folding is one example in a broad field. In fact, the reason Folding works better on Nvidia cards has very little to do with the cards and everything to do with the way it is coded. They did release an optimized version for ATI cards some time ago, and it made a big stink in the community because all of the dedicated folders had invested heavily in Nvidia hardware. They changed it back pretty quickly. And single and double precision don't tell you anything about the math used or how it works; they just tell you how detailed the results are, which is pointless for assessing an architecture.

When you get into program design, especially in something like GPGPU, you have to make some major choices about how the program will work. You don't just write a while loop and call it done. Optimizing code for a specific architecture is low-level stuff and is pretty complicated. I wish I could speak more on it, but that wasn't my area.

Without knowing the code in Folding@home, I would argue that an ATI-optimized variant would be faster than an Nvidia-optimized variant, simply because of the size and number of protein sequences they analyze. Protein analysis, to my knowledge, isn't difficult on its own. But when you run an entire enzyme or sequence a strand of nucleotides, the work adds up. That would lend itself well to being broken down into many more small bits.

An oversimplified generalization would be that a good use of Nvidia hardware (if you are doing all of the development) is a workload that can be heavily parallelized but has a large memory requirement. For ATI, I would say something that can be broken down obnoxiously fine and doesn't require as much memory. Of course, there are other considerations to be made.
 
For the record... no GPU client optimized for ATI cards has come out other than the original GPU client.

Nvidia cards start to chug when they get larger units... and yes, Nvidia folders cry a lot when their PPD drops.
 

I'm not speaking in terms of just general folding, either. Each folding client supports different things because of different coding, like you said. For instance, Fermi's desktop variants have low double-precision performance because they are game-oriented, which means Milkyway@home yields faster PPD on ATI cards like the 5870, and my Fermi doesn't even have an application for it yet; they said it's simply because Fermi is too slow for them to care. Yet on the other hand you have Stanford's F@H, which utilizes the Fermi chips to their full extent and doesn't make much use of ATI. See what I'm saying? I honestly understand what you're saying about GPGPU. My main point, though, is that standard consumers like you and I won't be looking at SPU performance for the sole purpose of GPGPU. It's all about gaming performance on the desktop variants, and therefore companies like Nvidia and AMD put their raw GPGPU performance into their HPC variants. In other words, to them, people who buy a desktop video card for GPGPU performance are a minority, and Nvidia openly admitted that in one of their online live chats on CUDA.

As for the memory requirement, that's why HPC cards have large amounts of RAM :)
 

I didn't see you say that before. I was just correcting some inaccuracies. I agree that GPGPU is secondary; they make video cards first. And honestly, I think it's a fad. GPUs are really, really good at certain things, but the problem has to be one that you can apply extreme reductionism to (read: parallelize). Some things you can only run in a procedural manner, some things can be broken into a few threads, and some problems can be broken into thousands of tiny bits. A lot of people have jumped on the bandwagon knowing little about GPU architecture and assuming everything is faster on a GPU. That simply isn't true.

I'll also say again: single and double precision don't mean much as words on their own. Double precision of what? Double precision means you have twice as many places in a value; it's the difference between calling pi 3.1 and 3.14. This depends a lot on how you are doing it and what you are looking for. Double precision just implies you want a more precise answer; it says nothing of how you get there. When people say single and double precision in terms of performance, they most often mean in relation to running benchmark software such as Linpack, Dhrystone, Whetstone, Sandra, or 3DMark, etc.
 
Just what is AMD doing with video cards? Are they just doing away with the ATI branding and continuing video cards, or are they doing something a lot more?

Sorry to sound like a n00b. Ever since I moved away from SOHO and into corporate, I have been out of touch with the latest hardware trends.
 

Basically they are just dropping the "ATI" name. They also seem to be much more on top of things as far as releases and drivers go.
 