GPU Accelerated Computing with Python

https://developer.nvidia.com/how-to-cuda-Python

Python is one of the fastest-growing and most popular programming languages available. However, as an interpreted language, it has long been considered too slow for high-performance computing.  That has now changed with the release of the NumbaPro Python compiler from Continuum Analytics.

CUDA Python – Using the NumbaPro Python compiler, part of the Anaconda Accelerate package from Continuum Analytics, you get the best of both worlds: rapid iterative development and the other benefits of Python, combined with the speed of a compiled language targeting both CPUs and NVIDIA GPUs.
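
To make the programming model concrete, here is a minimal sketch of a CUDA kernel written directly in Python. It uses the open-source Numba package (numba.cuda), into which NumbaPro's CUDA support was later folded; the NumbaPro API was very similar but not identical, so treat this as illustrative rather than as the exact NumbaPro syntax.

    # Minimal sketch: a CUDA kernel written directly in Python with numba.cuda.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_arrays(a, b, out):
        i = cuda.grid(1)            # absolute index of this thread
        if i < out.size:            # guard against threads past the end of the array
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.arange(n, dtype=np.float32)
    b = 2 * a
    out = np.empty_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    # Host arrays are copied to the GPU for the launch and copied back afterwards.
    add_arrays[blocks, threads_per_block](a, b, out)

    print(out[:5])   # [ 0.  3.  6.  9. 12.]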

Getting Started

  1. If you are new to Python, the python.org website is an excellent source of getting-started material.
  2. Read this blog post if you are unsure what CUDA or GPU Computing is all about.
  3. Try CUDA by taking a self-paced lab on nvidia.qwiklab.com. These labs only require a supported web browser and a network that allows Web Sockets. Click here to verify that your network and system support Web Sockets: in the "Web Sockets (Port 80)" section, all check marks should be green.
  4. Watch the first CUDA Python CUDACast.
  5. Install Anaconda Accelerate:
    • First install the free Anaconda package from this location.
    • Once Anaconda is installed, you can install a trial version of the Accelerate package with Anaconda’s package manager by running conda install accelerate.  See here for more detailed information.  Note that the Anaconda Accelerate package is free for academic use.

Learning CUDA

  1. For documentation, see the Continuum website for these various topics:
    • Learn more about libraries
    • See how to use vectorize to automatically accelerate functions (a minimal sketch follows this list)
    • Learn how to write CUDA code directly in Python
  2. Browse through the available code examples.
  3. Browse and ask questions on NVIDIA’s DevTalk forums, or ask at stackoverflow.com.
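
As an illustration of the vectorize approach mentioned above, the sketch below compiles a scalar Python function into a ufunc that runs element-wise on the GPU. It uses the open-source Numba syntax (target='cuda'); NumbaPro exposed an equivalent decorator, so the details here are an assumption about the general technique rather than NumbaPro-specific code.

    # Minimal sketch: vectorize turns a scalar function into a GPU-executed ufunc.
    import numpy as np
    from numba import vectorize

    @vectorize(['float32(float32, float32)'], target='cuda')
    def scale_and_add(x, y):
        # Applied element-wise across the input arrays on the GPU.
        return 2.0 * x + y

    x = np.arange(1_000_000, dtype=np.float32)
    y = np.ones_like(x)
    result = scale_and_add(x, y)   # runs on the GPU, returns a NumPy array
    print(result[:3])              # [1. 3. 5.]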

So, now you’re ready to deploy your application?
You can register today for FREE access to NVIDIA Tesla K40 GPUs.
Develop your code on the fastest accelerator in the world: try a Tesla K40 GPU and accelerate your development.

Performance/Results

  • It’s possible to get an enormous speed-up, 20x to 2000x, when moving from a pure Python application to one where the critical functions are accelerated on the GPU, in many cases with only small changes to the code.  Some simple examples demonstrating this can be found here:
    1. A Mandelbrot example accelerated with CUDA Python: a 19x speed-up over the CPU-only accelerated version and a 2000x speed-up over pure interpreted Python code (a simplified kernel sketch follows this list).
    2. A Monte Carlo option pricer example accelerated with CUDA Python: a 30x speed-up over interpreted Python code after accelerating on the GPU.
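
The sketch below is not the article's benchmark code, but it shows the general shape of such a Mandelbrot kernel with numba.cuda: each GPU thread computes the escape count for one pixel. The image size, viewing window, and iteration count are arbitrary illustration values.

    # Simplified Mandelbrot sketch (not the benchmark above), assuming numba.cuda.
    import numpy as np
    from numba import cuda

    @cuda.jit(device=True)
    def escape_count(cx, cy, max_iter):
        # Iterate z -> z^2 + c and return the iteration at which |z| exceeds 2.
        x = y = 0.0
        for i in range(max_iter):
            xt = x * x - y * y + cx
            y = 2.0 * x * y + cy
            x = xt
            if x * x + y * y > 4.0:
                return i
        return max_iter

    @cuda.jit
    def mandelbrot_kernel(xmin, xmax, ymin, ymax, image, max_iter):
        col, row = cuda.grid(2)                  # one thread per pixel
        height, width = image.shape
        if row < height and col < width:
            cx = xmin + col * (xmax - xmin) / width
            cy = ymin + row * (ymax - ymin) / height
            image[row, col] = escape_count(cx, cy, max_iter)

    image = np.zeros((1024, 1536), dtype=np.int32)
    threads = (16, 16)
    blocks = ((image.shape[1] + 15) // 16, (image.shape[0] + 15) // 16)
    mandelbrot_kernel[blocks, threads](-2.0, 1.0, -1.0, 1.0, image, 20)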

Alternative Solution - PyCUDA

Another option for accelerating Python code on a GPU is PyCUDA.  This library lets you call into the CUDA driver API and launch kernels written in CUDA C from Python, executing them on the GPU.  One use case for this is using Python as a wrapper around your CUDA C kernels for rapid development and testing.
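
A minimal sketch of that workflow with PyCUDA: the kernel is written in CUDA C as a Python string, compiled at runtime with SourceModule, and launched from Python. The array contents and launch configuration here are arbitrary illustration values.

    # Minimal PyCUDA sketch: compile a CUDA C kernel at runtime and launch it from Python.
    import numpy as np
    import pycuda.autoinit            # creates a CUDA context on import
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void double_values(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= 2.0f;
    }
    """)
    double_values = mod.get_function("double_values")

    data = np.arange(1024, dtype=np.float32)
    # drv.InOut copies the array to the GPU before launch and back afterwards.
    double_values(drv.InOut(data), np.int32(data.size),
                  block=(256, 1, 1), grid=(4, 1))
    print(data[:4])   # [0. 2. 4. 6.]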

Original source: https://www.cnblogs.com/dhcn/p/7130996.html