Is Your Son a Computer Hacker?

CUDA is an x86-compatible API; it's not very hard to code for. It's based on C as well.

I'd also like to see what sources you drew this conclusion from. Oh, and you can't compare Larrabee to current hardware, as it won't arrive for another 1-2 years. And a stream processor in a GPU is a core under CUDA.
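
For reference, this is roughly what CUDA's C dialect looks like (a minimal sketch; the kernel name and shapes are mine, not from any shipping code). Each thread runs one instance of the function on a stream processor, and the index math replaces the loop you'd write on a CPU:

    // One GPU thread computes one output element; the guard handles
    // array sizes that don't divide evenly into blocks.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            c[i] = a[i] + b[i];
    }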

Nvidia SPs are very strong; they beat out ATI SPs clock for clock.

Larrabee's "pixel/vertex shaders" are implemented by the in-order cores described in the previous article. Note that in the previous article, I stated that a Larrabee GPU product would have at least 10 such cores. The new slide says that Larrabee products will have from 16 to 24 cores and adds the detail that these cores will operate at clockspeeds between 1.7 and 2.5GHz (150W minimum). The number of cores on each chip, as well as the clockspeed, will vary with each product, depending in its target market (mid-range GPU, high-end GPU, HPC add-in board, etc.).
(Larrabee) 24 cores at 2.5GHz versus (GTX 280) 240 cores at 1.5GHz

I don't see how Larrabee is much faster. And remember, CUDA has nothing to do with DirectX; it bypasses all of that and treats a GPU as a small cluster of x86 processors.
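
Putting rough numbers on that (core count times clock, ignoring how much work each "core" does per cycle): the GTX 280 gives 240 × 1.5GHz = 360 core-GHz, against 24 × 2.5GHz = 60 core-GHz for the biggest Larrabee. The caveat is that each Larrabee core is expected to carry a 16-wide vector unit, so counted in SIMD lanes it's 24 × 16 × 2.5GHz = 960 lane-GHz, which is why raw core counts alone don't settle this.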
 
Nvidia's SPs may be strong, but a full processor core is far more sophisticated than a stream processor; that's a given. Compare one Nvidia SP to an original Pentium (since a beefed-up in-order Pentium core is essentially what each core of Larrabee is).

CUDA may "support" x86, but Larrabee IS x86. There are compilers for x86 for just about every language ever made. CUDA just has C. Which is terribly outdated. its not even OO and CUDA doesn't support things like recursion that are pretty common place in today's code.

Also, for some reason Nvidia didn't give CUDA SLI support. I can't imagine why, but that will undoubtedly hurt them against Larrabee on the GPGPU front.
 
Well, while CUDA does not require SLI, it can still address multiple GPUs in a system, like the FASTRA box, which has four 9800 GX2s, so eight GPUs.

And applications can still take advantage of multiple GPUs in CUDA without SLI. For example, F@H doesn't support multiple cards natively yet, but you can launch two instances using two shortcuts and differently named program folders.

It is possible to run eight clients inside one system (it takes some work to set up), but it works.
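
Here is a minimal sketch of how a CUDA program sees those GPUs without SLI (the printout wording is mine; cudaGetDeviceCount, cudaGetDeviceProperties, and cudaSetDevice are the actual runtime calls). Each worker, like the renamed F@H clients above, would simply pick a different device index:

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);       // a box with four 9800 GX2s reports 8
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            printf("Device %d: %s (%d multiprocessors)\n",
                   d, prop.name, prop.multiProcessorCount);
        }
        cudaSetDevice(0);                 // each client picks its own index here
        return 0;
    }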


CUDA bypasses the native graphics rendering of a GPU and treats it fully as a CPU; the chip keeps thousands of threads in flight at once.
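
A sketch of what that looks like from the host side (assuming the vecAdd kernel sketched earlier and device buffers already allocated with cudaMalloc): a GTX 280 has only 240 SPs, yet a launch like this creates over a million threads, which the hardware time-slices to hide memory latency.

    // Hypothetical launch helper: ~1M threads on a 240-SP chip is routine.
    void launchVecAdd(const float *d_a, const float *d_b, float *d_c, int n)
    {
        int threadsPerBlock = 256;                                 // max 512 on CUDA 1.x parts
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;  // n = 1<<20 -> 4096 blocks
        vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);
        cudaThreadSynchronize();  // CUDA 1.x name; later renamed cudaDeviceSynchronize
    }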

And lots of applications and uses are being developed under it. I'm really excited about it myself. I tried the Badaboom media converter, and it encodes video incredibly fast.

CUDA Zone -- The resource for CUDA developers
 
Still, the fact that it is C is a major drawback in my opinion. They should have at least used C++, since it is object-oriented, which is a much better approach to programming than procedural code.
 
Man, it's like you guys are trying to nitpick every little thing:


  • Parallel bitonic sort
  • Matrix multiplication
  • Matrix transpose
  • Performance profiling using timers
  • Parallel prefix sum (scan) of large arrays
  • Image convolution
  • 1D DWT using Haar wavelet
  • OpenGL and Direct3D graphics interoperation examples
  • CUDA BLAS and FFT library usage examples
  • CPU-GPU C- and C++-code integration
  • Binomial Option Pricing
  • Black-Scholes Option Pricing
  • Monte-Carlo Option Pricing
  • Parallel Mersenne Twister (random number generation)
  • Parallel Histogram
  • Image Denoising
  • Sobel Edge Detection Filter
  • MathWorks MATLAB® Plug-in
SDK code samples are available for download. Installation of the CUDA toolkit is required before running these precompiled examples.
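
As a taste of what the samples on that list look like, here is a stripped-down sketch in the spirit of the "Matrix transpose" entry (the real SDK sample adds shared-memory tiling; this naive version just shows the one-thread-per-element idea):

    __global__ void transposeNaive(float *out, const float *in, int w, int h)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;  // column in the input
        int y = blockIdx.y * blockDim.y + threadIdx.y;  // row in the input
        if (x < w && y < h)
            out[x * h + y] = in[y * w + x];             // write the transposed element
    }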
 
I'm not saying CUDA is bad; it has a lot of cool features. It's just that when you put it up against a full x86 processor, I don't see its advantage. Larrabee brings the best of both worlds together, and I think it will be very influential on GPU designs for the next few generations.
 