Nvidia Mulls Over Porting PhysX to OpenCL

Status
Not open for further replies.

Muffin Man

muffin enthusiast
Nvidia Mulls Over Porting PhysX to OpenCL - X-bit labs

"Nvidia May Allow GPU PhysX to Work on ATI Hardware

Nvidia Corp. said that it could eventually port the computation of physics effects created using PhysX middleware to the OpenCL application programming interface (API) and capable hardware. This may actually enable GPU-accelerated processing of physics effects on ATI Radeon hardware.

At present, PhysX middleware is used to create physics effects on various platforms, including video game consoles and personal computers. In virtually all cases, processing of physics effects is performed on the central processing unit – x86 chips in the case of the PC, Cell in the case of the PlayStation 3, or PowerPC in the case of the Xbox 360 – however, there are a handful of games that can take advantage of physics computing on Nvidia GeForce graphics processing units (GPUs) that support CUDA technology. The latter is Nvidia's proprietary API and is naturally not supported by chips developed by ATI, the graphics business unit of Advanced Micro Devices. Due to such limitations, not a lot of game developers are implementing GPU PhysX, and enabling acceleration using DirectCompute or OpenCL is one of the best ways to popularize processing of physics effects on graphics processors.

“In the future it is a possibility that we could use OpenCL, but at the moment CUDA works great. [Our GPU] architecture allows for acceleration by other things like OpenCL. Nvidia works very closely with The Khronos Group, actually Neil Trevett is president of the group and he's part of Nvidia, so we've been driving that standard also, and it's an excellent standard,” said Nadeem Mohammad, a director of PhysX product management at Nvidia, in an interview with the Bit-tech website.

Porting PhysX to OpenCL is a natural thing to do, since the standard is supported by both central processing units (CPUs) and GPUs; hence, this would make the PhysX middleware much more universal, something which is important for competing against providers of rival engines, such as Havok, a division of Intel Corp.

However, Mr. Mohammad warned about possible performance issues with non-Nvidia hardware, claiming that ATI is far behind Nvidia when it comes to GPU computing.

“If we start using OpenCL, then there's a chance that the features would work on ATI, but I have no idea what the performance would be like. Previously, looking at things like Folding@home, ATI GPU computing performance seems to be behind Nvidia. That probably reflects the fact that their GPU computing solution is probably a couple of generations behind ours,” said Mr. Mohammad.

Nvidia did not elaborate when it plans to port PhysX to OpenCL."
 
“If we start using OpenCL, then there's a chance that the features would work on ATI, but I have no idea what the performance would be like. Previously, looking at things like Folding@home, ATI GPU computing performance seems to be behind Nvidia. That probably reflects the fact that their GPU computing solution is probably a couple of generations behind ours,” said Mr. Mohammad.

."

He's so full of ****.
 
He's so full of ****.

Agreed. I think it's hilarious that he mentions one Nvidia-biased app (F@H) to justify their superiority in GPGPU. You could easily make the same argument for ATI having superior GPGPU performance based on their complete dominance of MilkyWay@home.
 
Agreed. I think it's hilarious that he mentions one Nvidia-biased app (F@H) to justify their superiority in GPGPU. You could easily make the same argument for ATI having superior GPGPU performance based on their complete dominance of MilkyWay@home.

Um, there is no "bias" toward Nvidia. The first GPU client was for ATI cards only. Nvidia cards run better in F@H because of the math involved and how the two cards process the math used in the simulations.

Yes, the statement's crap, but not because of any bias in the F@H program.
 
Um, there is no "bias" toward Nvidia. The first GPU client was for ATI cards only. Nvidia cards run better in F@H because of the math involved and how the two cards process the math used in the simulations.

Yes, the statement's crap, but not because of any bias in the F@H program.

I admit bias isn't the right word, since the performance difference probably isn't intentional. However, the point still stands that using an app like F@H, which performs disproportionately well on Nvidia cards, to justify superior GPGPU performance is meaningless, since you could make the same statement using a program dominated by ATI.
 
ATI could easily work with the F@H team to develop an app that works better with their cards, but they don't, or should I say they won't.

ATI has said their GPUs could support physics processing, and for a while it seemed like they were serious. ATI Takes on Physics | [H]ard|OCP
But I guess it was just too much for them to continue with. Actually, it seems that anything dealing with software development is against their normal operating procedures.

I think the statement was accurate. Until ATI is pressured (by its consumers) to improve its software development, Nvidia is the king.
 
ATI could easily work with the F@H team to develop an app that works better with their cards, but they don't, or should I say they won't.

It's not necessarily that simple. ATI's and Nvidia's architectures are radically different, and what works well on one doesn't necessarily work well on the other.
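
To make that concrete: current ATI parts are VLIW designs that want explicitly vectorized code, while Nvidia's stream processors are scalar, so the same OpenCL kernel often has to be written two ways to run well on both. A quick illustrative sketch (the kernel names are made up, and this is a toy SAXPY update, not anything out of F@H):

// Scalar version: one float per work-item.
// Maps naturally onto Nvidia's scalar stream processors.
__kernel void saxpy_scalar(__global const float *x,
                           __global float *y,
                           const float a)
{
    size_t i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}

// Vectorized version: one float4 per work-item.
// ATI's VLIW hardware needs this kind of explicit packing
// to keep its wide ALU slots busy.
__kernel void saxpy_vec4(__global const float4 *x,
                         __global float4 *y,
                         const float a)
{
    size_t i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}

Same math, same memory traffic, but on a VLIW part the vectorized one can be several times faster, while on Nvidia the scalar one is already fine. That's exactly the kind of gap a port tuned for only one vendor falls into.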

ATI has said their GPUs could support physics processing, and for a while it seemed like they were serious. ATI Takes on Physics | [H]ard|OCP
But I guess it was just too much for them to continue with. Actually, it seems that anything dealing with software development is against their normal operating procedures.

Any GPU, whether it's from ATI, Nvidia, or Intel, is capable of supporting physics processing, provided the physics engine is implemented using a vendor-agnostic API like OpenCL or DirectCompute. The problem is that no company has done that. Nvidia didn't want anyone else to be able to use PhysX, so they implemented it using a proprietary API, CUDA, that can only run on their cards, and then they crippled the CPU version so they could use PhysX as a marketing tool. There was a rumor that Havok was going to be released for OpenCL, but when Intel canceled Larrabee they no longer had an incentive to move forward with it.
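
And to be clear about what "vendor agnostic" buys you: the same OpenCL host code enumerates whatever platforms are installed, whether the driver underneath is Nvidia's, ATI's, or anyone else's. A minimal sketch using only standard OpenCL 1.0 calls (no vendor extensions assumed):

/* lists every OpenCL platform and device on the machine;
   compile with e.g. gcc list_cl.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* ask the ICD loader for every installed platform (Nvidia, ATI, ...) */
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;

    for (cl_uint i = 0; i < num_platforms; ++i) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        printf("Platform: %s\n", name);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL,
                       8, devices, &num_devices);
        if (num_devices > 8) num_devices = 8;

        for (cl_uint j = 0; j < num_devices; ++j) {
            char dev_name[256];
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME,
                            sizeof(dev_name), dev_name, NULL);
            printf("  Device: %s\n", dev_name);
        }
    }
    return 0;
}

A physics engine written against that API doesn't know or care whose silicon it lands on. The lock-in with PhysX is a business decision, not a technical one.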

I think the statement was accurate. Until ATI is pressured (by its consumers) to improve its software development, Nvidia is the king.

Nvidia's dominance of the GPGPU market is little more than marketing. Also, for the record, I have heard that ATI has the more mature OpenCL compiler.
 
Slay, ATI's problem in getting a better/faster cruncher was/is that OpenCL got delayed and still isn't ready for a science-type application to rely on it.

F@H is trying to get a new client to better utilize ATI, but they need OpenCL to do it.

Once that's all smoothed out, ATI will explode. Also, ATI cards don't have huge swings in processing, whereas an Nvidia card can vary by 2,000 or 3,000 points.

But again, that statement by Mohammad was totally bad and a low blow to ATI.
 
ATI could easily work with the F@H team to develop an app that works better with their cards, but they don't, or should I say they won't.

ATI has said their GPUs could support physics processing, and for a while it seemed like they were serious. ATI Takes on Physics | [H]ard|OCP
But I guess it was just too much for them to continue with. Actually, it seems that anything dealing with software development is against their normal operating procedures.

I think the statement was accurate. Until ATI is pressured (by its consumers) to improve its software development, Nvidia is the king.

Wow, I don't know where to start. There are so many things wrong with this statement.

I work with GPGPU research at the university level. So far we have a 64-node cluster we built this month using Nvidia cards, and this summer we will complete its sister, which will use ATI cards. Having worked with both at the software level in GPGPU, I can tell you that ATI's development environment is far more mature than Nvidia's. ATI has a clear and effective compiler, already several generations old, that works very well; CUDA, on the other hand, is a mess.

And that isn't bias talking. I used to think CUDA was a joke, but after using it for real work in an academic setting I can safely say it's a hacked-together mess. I'm not the only one who thinks this, either; the notion is held by every member of the team, including the several grad students and the PhD in charge. I come from a hardware background mostly, so I don't know some of the finer points, but I can tell you that anyone who thinks Nvidia has better GPGPU technology either doesn't know what they are talking about or has some ulterior motive.

The only advantage CUDA has is that ATI's compiler requires SSE2 and SSE3 for some unknown reason and CUDA doesn't. This means that if you use old CPUs as hosts for modern video cards (a very cost- and energy-effective approach for a GPU cluster), ATI can be difficult to work with. We are working with ATI on a solution. I'm not holding my breath, though, and in the long run I doubt it's that big of a deal.

Slay, ATI's problem in getting a better/faster cruncher was/is that OpenCL got delayed and still isn't ready for a science-type application to rely on it.

Trust me, ATI is ready for scientific work. The only shock will be that the claimed power of these cards is way overstated, but this is also true of Nvidia. The HPC market has little tolerance for BS, hence the community's slow acceptance of GPGPU. The graphics industry lives in a marketing-driven culture; HPC is results-driven, and it's difficult to reconcile the two.

Once that's all smoothed out, ATI will explode. Also, ATI cards don't have huge swings in processing, whereas an Nvidia card can vary by 2,000 or 3,000 points.

But again, that statement by Mohammad was totally bad and a low blow to ATI.

You are right. The problem doesn't lie with ATI's card or environment; it's completely the developers' fault for not optimizing for the cards, and for GPUs in general. With a GPU, the more time you can stay on the GPU and off the rest of the system, the better. Making calls to RAM, the CPU, or the disks is expensive. The card's memory infrastructure is very rigid and the GDDR is fast; if you can keep things local, these cards scream in single-precision FP. Double precision seems to be a secondary concern for the GPU guys, as they want numbers, not accuracy.

If you code it right, CPU usage should be low, in the single-digit percentages. Because of this, we can make a cluster based on old Athlon XP CPUs that performs just as fast as Core i7s. The CPU is irrelevant in GPGPU work; in fact, I would argue that developing on a system with a powerful CPU would slow you down, as it gives you a reason to utilize the CPU, which incurs communication losses that destroy performance.
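
Here's the pattern in sketch form, assuming the queue, kernel, and device buffer have already been created (step_kernel is a made-up name, not anything from F@H). The point is that the state crosses the PCIe bus exactly twice, no matter how many steps you run:

#include <CL/cl.h>

/* run n_steps of a simulation entirely on the device;
   the host only touches the data at the start and the end */
void run_on_gpu(cl_command_queue queue, cl_kernel step_kernel,
                cl_mem d_state, float *h_state, size_t bytes,
                size_t global_size, int n_steps)
{
    /* one upload at the start (non-blocking; the queue orders it) */
    clEnqueueWriteBuffer(queue, d_state, CL_FALSE, 0, bytes, h_state,
                         0, NULL, NULL);

    clSetKernelArg(step_kernel, 0, sizeof(cl_mem), &d_state);

    for (int step = 0; step < n_steps; ++step) {
        /* each step is just another launch in the queue:
           no readbacks, no CPU work in between */
        clEnqueueNDRangeKernel(queue, step_kernel, 1, NULL,
                               &global_size, NULL, 0, NULL, NULL);
    }

    /* one download at the end; the blocking read drains the queue */
    clEnqueueReadBuffer(queue, d_state, CL_TRUE, 0, bytes, h_state,
                        0, NULL, NULL);
}

Code it like that and the host CPU sits idle; code it with a readback every step and even a Core i7 won't save you.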

In short, don't blame ATI for F@H's bad coding.

As for Nvidia being off, that has a lot to do with them pushing an unstable monolithic GPU architecture. This can be alleviated by using more, smaller cards, like we did. We went with 9500 GTs for several reasons, including that one. They are the 1GB models and have few SPs. That gives us a memory-heavy architecture, which is great for GPGPU; when you have a strict memory system like a GPU's, a lot of memory per SP is a good thing. This sounds counterintuitive, I know, because on a desktop 1GB on a little card is pointless. It's great for us, though.
 