OpenACC Announced

NVIDIA, Cray Inc., the Portland Group (PGI), and CAPS enterprise have partnered to develop OpenACC, a new parallel programming standard.

OpenACC lets scientific and technical programmers more easily take advantage of heterogeneous CPU/GPU computing systems. Programmers insert simple hints, known as "directives," that identify which regions of code to accelerate, without modifying or adapting the underlying code itself. By exposing parallelism in this way, the directives let the compiler perform the detailed work of mapping the computation onto the accelerator.
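To illustrate the directive style, here is a minimal sketch in C (not taken from the announcement); the saxpy routine and its arguments are illustrative, and a compiler without OpenACC support simply ignores the pragma and builds an ordinary CPU loop:

    /* Illustrative routine: compute y = a*x + y over n elements. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        /* The directive asks an OpenACC compiler to offload this loop
           to an accelerator; the loop body itself is unchanged. */
        #pragma acc parallel loop
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

Because the annotation is a compiler hint rather than a code change, the same source remains portable to systems without an accelerator.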

Note that OpenACC is not being released as a competitor to OpenMP, a leading parallel programming standard. Instead, it is being developed as a parallel effort, with the expectation that experience from the OpenACC project will feed back into the next OpenMP specification.

"The OpenACC announcement highlights the technically impressive initiative undertaken by members of the OpenMP Working Group on Accelerators," said Michael Wong, CEO of the OpenMP Architecture Review Board. "I look forward to working with all four companies within the OpenMP organization to merge OpenACC with other ideas to create a common specification which extends OpenMP to support accelerators."

Existing compilers from Cray, PGI, and CAPS are expected to support the OpenACC standard beginning in the first quarter of 2012. The OpenACC standard is fully compatible and interoperable with the NVIDIA CUDA parallel programming architecture. More information about OpenACC is available at http://www.OpenACC-standard.org.

12/15/2011
