Tuesday, June 10, 2008

GPU Compute Cloud

A while ago, when the first experiments with the IBM/Sony Cell processor had brought some results (also confirmed by scientists who built a commodity cluster out of Sony PlayStation boxes), and I noticed similar efforts at Nvidia and ATI, I saw a bright future for cloud computing solutions specialized in specific tasks such as image processing and other problems that are usually computed on vector/stream processor units (or on a combination of a general-purpose unit and stream/vector processing units).
Some background Wikipedia information on the available development platform:

CUDA ("Compute Unified Device Architecture"), is a GPGPU technology that allows a programmer to use the C programming language to code algorithms for execution on the graphics processing unit (GPU).

CUDA was developed by Nvidia, and using this architecture requires an Nvidia GPU and special stream processing drivers. CUDA works with the new GeForce 8 Series, featuring G8X GPUs; Nvidia states that programs developed for the GeForce 8 series will also work without modification on all future Nvidia video cards.

CUDA gives developers unfettered access to the native instruction set and memory of the massively parallel computational elements in CUDA GPUs. Using CUDA, Nvidia GeForce-based GPUs effectively become powerful, programmable open architectures like today’s CPUs (Central Processing Units).

By opening up the architecture, CUDA provides developers with both a low-level, deterministic API and a high-level API for repeatable access to the hardware, which is necessary for developing essential high-level programming tools such as compilers, debuggers, math libraries, and application platforms.
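
To make the low-level/high-level distinction concrete, here is another small sketch of mine (again, not from the excerpt) that uses the low-level driver API (the cu* functions in cuda.h, linked with -lcuda) just to enumerate devices, in contrast to the high-level runtime API (the cuda* functions) used in the vector-add example above.

#include <cstdio>
#include <cuda.h>  // low-level CUDA driver API

int main() {
    // Unlike the runtime API, the driver API requires explicit initialization
    // and explicit device handles before anything else can be done.
    cuInit(0);

    int count = 0;
    cuDeviceGetCount(&count);

    for (int i = 0; i < count; ++i) {
        CUdevice dev;
        cuDeviceGet(&dev, i);

        char name[256];
        cuDeviceGetName(name, sizeof(name), dev);

        int major = 0, minor = 0;
        cuDeviceComputeCapability(&major, &minor, dev);

        printf("Device %d: %s (compute capability %d.%d)\n", i, name, major, minor);
    }
    return 0;
}

The driver API gives tools such as compilers and debuggers fine-grained, deterministic control over contexts and modules, while the runtime API keeps everyday application code short.
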
