NumPy GPU acceleration


I recently had to compute many inner products with a given matrix $\mathbf{A}$ for many different vectors $\mathbf{x}_i$, or

$$\mathbf{x}_i^\top \mathbf{A} \mathbf{x}_i.$$

Each vector $\mathbf{x}_i$ represents a shoe from Zappos, and there are 50k vectors $\mathbf{x}_i \in \mathbb{R}^{1000}$. This computation took place behind a user-facing web interface, and during testing it had a delay of 5 minutes. That is clearly unacceptable; how can we make it faster?

I spent a couple of hours trying to get the best possible performance from my functions… and through this, I found a speed optimization that put most of the computation on NumPy’s shoulders. After I made this change, the naïve for-loop and NumPy were about a factor of 2 apart, not enough to write a blog post about.
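The optimization isn’t spelled out above, but a minimal sketch of the idea, assuming the problem really is many $\mathbf{x}_i^\top \mathbf{A} \mathbf{x}_i$ products (the sizes below are scaled down for the demo), is to replace the Python-level loop with one big matrix product:

import numpy as np

n, d = 2_000, 1_000          # the real problem had n = 50k, d = 1000
X = np.random.randn(n, d)    # one row per vector x_i
A = np.random.randn(d, d)

# Naive: a Python-level loop, one inner product per iteration.
slow = np.array([x @ A @ x for x in X])

# Vectorized: one matrix product, then a row-wise reduction.
fast = ((X @ A) * X).sum(axis=1)

assert np.allclose(slow, fast)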

Use of an NVIDIA GPU significantly outperformed NumPy. Given that most of the optimization centered on a single matrix multiplication, let’s focus on the speed of matrix multiplication.

We know that when multiplying two $n \times n$ matrices, matrix multiplication has computational complexity of something like $O(n^{2.8074})$ (Strassen’s algorithm), though very likely greater than $O(n^{2.375477})$ (the Coppersmith–Winograd bound). We can’t get around this without diving into theory, but we can change the constant that dictates exactly how fast these algorithms run.

The tools I’ll test are:

- the default NumPy install, with no MKL (even though MKL is now provided by default with Anaconda)
- Intel MKL, a tool that provides acceleration for BLAS/LAPACK
- the GPU. To do this, I’ll need an Amazon AWS machine and the NVIDIA CUDA Toolkit. An easy interface is available through cudamat, but scikit-cuda and Accelerate also have nice interfaces and provide more access.
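To check which of the first two environments you have, NumPy can report the BLAS/LAPACK libraries it was built against; with Anaconda’s default install, the output mentions MKL:

import numpy as np

# Print the BLAS/LAPACK build configuration; look for "mkl" in the output.
np.show_config()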

I had planned to test other tools, but those tests didn’t pan out for reasons given in the appendix. My test script can be summarized as follows:

import numpy as np
import cudamat as cm

n, p = int(2e3), int(40e3)

# Time the matrix product on the CPU with NumPy.
A = np.random.randn(n, p)
B = np.random.randn(p, n)
%timeit A @ B

# Time the same product on the GPU with cudamat.
cm.cublas_init()
cm.CUDAMatrix.init_random()
A_cm = cm.empty((n, p)).fill_with_randn()
B_cm = cm.empty((p, n)).fill_with_randn()
%timeit A_cm.dot(B_cm)
cm.cublas_shutdown()

When doing this, I generate the following graph:


[Figure: matrix multiplication timings for each environment]

Environment    | Time (seconds)
NumPy + no MKL | 7.18
NumPy + MKL    | 4.057
cudamat        | 0.2898
Under the default Anaconda environment (i.e., with MKL), we see that our script runs 80% slower without MKL and has a 14x speedup under cudamat!
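To connect these timings with the constant-factor point above, here is a rough back-of-the-envelope throughput calculation, assuming the classical $2npn$ flop count for an $(n, p) \times (p, n)$ product:

n, p = 2000, 40000
flops = 2 * n * p * n  # ~3.2e11 floating-point operations

for name, seconds in [("no MKL", 7.18), ("MKL", 4.057), ("cudamat", 0.2898)]:
    print(f"{name}: {flops / seconds / 1e9:,.0f} GFLOP/s")

# Roughly 45, 79, and 1,100 GFLOP/s: the same cubic-time algorithm,
# with very different constants.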

This simple test shows that using the GPU is powerful, but cudamat is limited in that it only provides basic mathematical capability for the GPU (the dot product is as far as it goes).
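For example, a typical cudamat session, a sketch based on the usage shown in cudamat’s README, stays within these basic operations:

import numpy as np
import cudamat as cm

cm.cublas_init()

# Create two random matrices and copy them to the GPU.
a = cm.CUDAMatrix(np.random.rand(32, 256))
b = cm.CUDAMatrix(np.random.rand(256, 32))

# Basic linear algebra and simple reductions on the GPU.
c = cm.dot(a, b)
d = c.sum(axis=0)

# Copy the result back to the host (CPU).
print(d.asarray())

cm.cublas_shutdown()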

However, the libraries Accelerate and scikit-cuda use the GPU, and both provide more complex mathematical functions, including fft, svd, and eig.

Accelerate and scikit-cuda are fairly similar. In choosing between them, there are two obvious tradeoffs:

- scikit-cuda has access to linear algebra functions (e.g., eig) and Accelerate does not. However, access to these higher-level mathematical functions comes through CULA, another framework that requires a license (free academic licenses are available).
- Accelerate can accept raw ndarrays, while scikit-cuda needs gpuarrays passed in, meaning more setup/cleanup (see the sketch after this list).
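A sketch of what that setup/cleanup looks like on the scikit-cuda side, assuming its fft module and a working PyCUDA install (the array size here is arbitrary):

import numpy as np
import pycuda.autoinit            # set up a CUDA context
import pycuda.gpuarray as gpuarray
import skcuda.fft as cu_fft

n = 2 ** 10
x = np.random.randn(n).astype(np.complex64)

# The "more setup" half of the tradeoff: move data to the GPU explicitly.
x_gpu = gpuarray.to_gpu(x)
xf_gpu = gpuarray.empty(n, np.complex64)

plan = cu_fft.Plan(n, np.complex64, np.complex64)
cu_fft.fft(x_gpu, xf_gpu, plan)

# ...and the cleanup half: copy the result back to a plain ndarray.
xf = xf_gpu.get()
print(np.allclose(xf, np.fft.fft(x), atol=1e-2))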

Whichever is chosen, large speed enhancements exist. I have timed a common function (fft) over different values of n; there is some overhead in moving data to the GPU, and I wanted to see where that overhead matters. I provide a summary of my testing script in the appendix.


[Figure: FFT timing for NumPy and the GPU over different values of n]

CULA has benchmarks for a few higher-level mathematical functions (source: the CULA Dense homepage):


[Figure: CULA benchmarks for higher-level mathematical functions]
Appendix

Other untested GPU libraries

- PyCUDA and PyOpenCL were not tested because they require writing C++ code (PyCUDA example, PyOpenCL example).
- gnumpy was not tested because it doesn’t support Python 3 and hasn’t been touched in 4 years.
- cudarray: I tried to install it but ran into install difficulties.
- theano supports the GPU (see “Using the GPU”) but was not tested; it seems to be primarily a machine learning library.

…and of course I didn’t optimize any loop-based functions. To optimize loop speed, I would look at numba first and then possibly Cython.
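For instance, a minimal numba sketch, my own illustration rather than anything from the timings above, assuming numba’s nopython-mode support for np.dot:

import numpy as np
from numba import njit

@njit
def inner_products(X, A):
    # Compiled loop: each iteration computes x_i^T A x_i.
    n = X.shape[0]
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(X[i], np.dot(A, X[i]))
    return out

X = np.random.randn(2_000, 1_000)
A = np.random.randn(1_000, 1_000)
print(inner_products(X, A)[:3])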

FFT timing script summary

In this script, I show the preparation needed for the FFT and for the linear algebra functions (e.g., culinalg.init()). I found that it’s useful to look at the…
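A sketch of that preparation, assuming scikit-cuda’s linalg module (imported here as culinalg, following the scikit-cuda docs) and a CULA license for svd:

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
import skcuda.linalg as culinalg

culinalg.init()  # the linear-algebra setup mentioned above

A = np.random.randn(512, 512).astype(np.float32)
A_gpu = gpuarray.to_gpu(A)

# Higher-level routines such as svd go through CULA and need a license.
U_gpu, s_gpu, Vh_gpu = culinalg.svd(A_gpu)
print(s_gpu.get()[:5])  # largest singular values, copied back to the CPU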
