
Pytorch Extension with a Makefile


PyTorch is a great neural network library that offers both flexibility and power. Personally, I think it is the best library for quickly prototyping (advanced) dynamic neural networks and deploying them to applications.

Recently, PyTorch was upgraded to version 1.0, which introduced the ATen tensor library for all backends and C++ custom extensions. Before the C++ extension API, it supported CFFI (C Foreign Function Interface) for custom extensions.

As an avid CUDA developer, I created multiple projects that speed up custom PyTorch layers using the CFFI interface. However, wrapping functions in a non-object-oriented language (C) sometimes led to ridiculous overhead when complex objects were required. Now that PyTorch supports the latest technology from 2011, C++11, we can use object-oriented programming for PyTorch extensions!

In this tutorial, I will cover some drawbacks of the current setuptools workflow and show you how to use a Makefile for PyTorch C++ extension development. The source code for the tutorial can be found here .

Before you proceed, please read the official PyTorch C++ extension guide, which provides an extensive and useful tutorial on creating a C++ extension with ATen.

Drawbacks of Setuptools for Development

However, setuptools is not very flexible, as it primarily focuses on the deployment of a project. Thus, it lacks many features that are essential for fast development. Let's delve into a few scenarios that I encountered while porting my PyTorch CFFI extensions to C++ extensions.

Compile only updated files

When you develop a huge project, you don't want to recompile the entire project every time you make a small change. However, if you use setuptools, it recompiles ALL source files every time you make a change. This becomes extremely cumbersome as your project grows larger.

A Makefile, by contrast, lets you cache all object files, since you have control over every file, and compile only the files that have been updated. This is extremely useful when you make a small change to one file and want to quickly debug your project.
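As a sketch, an incremental compile rule might look like the following (the directory names src/ and objs/ and the flags are illustrative assumptions, not taken from the project):

```makefile
# Sketch of incremental compilation: each source file maps to its own
# object file, and make rebuilds an object only when its source is newer.
CXX      := g++
CXXFLAGS := -std=c++11 -fPIC
SRCS     := $(wildcard src/*.cpp)
OBJS     := $(SRCS:src/%.cpp=objs/%.o)

objs/%.o: src/%.cpp
	@mkdir -p objs
	$(CXX) $(CXXFLAGS) -c $< -o $@
```

Touching one .cpp file then re-running make recompiles only that one object instead of the whole project.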

Parallel Compilation

Another problem with setuptools is that it compiles files sequentially. When your project gets huge, you might want to compile many files in parallel. With a Makefile, you can parallelize compilation with the -j# flag. For example, if you type make -j8 , it will compile 8 files in parallel.

Debugging

The current PyTorch C++ extension build does not enable debugging, even with a debug flag. With a Makefile, by contrast, you can pass -g (or -g -G for nvcc) with ease. In the Makefile , uncomment line 3 ( DEBUG=1 ) and line 20 of setup.py .
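A sketch of how the Makefile might switch between debug and release flags (the variable names here are illustrative, not the exact ones from the repository):

```makefile
# Uncomment to build with debug symbols.
# DEBUG := 1

ifeq ($(DEBUG), 1)
    CXXFLAGS  += -g          # host-side debug symbols
    NVCCFLAGS += -g -G       # device-side debug symbols for nvcc
else
    CXXFLAGS  += -O3
endif
```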

Making a pytorch extension with a Makefile

Now that we have covered some of the advantages of using a Makefile for a PyTorch C++ extension, let's get into the details of how to make one.

Creating Objects and Functions

As an example, in this tutorial, we will create a class and a CUDA function that are callable from Python. First, let's make a simple class that provides a setter and a getter for a private variable key_ .

```cpp
class Foo {
 private:
  uint64_t key_;

 public:
  void setKey(uint64_t key);
  uint64_t getKey();
  std::string toString() const {
    return "< Foo, key: " + std::to_string(key_) + " > ";
  };
};
```

We will fill out the setter and the getter functions in foo.cpp . Next, I created a simple CUDA function that adds two vectors and returns results in a new at::Tensor .
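The definitions in foo.cpp are then one-liners. A self-contained sketch (the class declaration from above is reproduced here so the snippet compiles on its own; in the project it would live in a header):

```cpp
#include <cstdint>
#include <string>

// Declaration, as it would appear in the header.
class Foo {
 private:
  uint64_t key_;

 public:
  void setKey(uint64_t key);
  uint64_t getKey();
  std::string toString() const {
    return "< Foo, key: " + std::to_string(key_) + " > ";
  };
};

// foo.cpp: definitions of the setter and the getter.
void Foo::setKey(uint64_t key) { key_ = key; }
uint64_t Foo::getKey() { return key_; }
```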

```cuda
template <typename Dtype>
__global__ void sum(Dtype *a, Dtype *b, Dtype *c, int N) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < N) {  // bounds check: only threads with a valid index write
    c[i] = a[i] + b[i];
  }
}

template <typename Dtype>
void AddGPUKernel(Dtype *in_a, Dtype *in_b, Dtype *out_c, int N,
                  cudaStream_t stream) {
  sum<Dtype><<<GET_BLOCKS(N), CUDA_NUM_THREADS, 0, stream>>>(in_a, in_b,
                                                             out_c, N);

  cudaError_t err = cudaGetLastError();
  if (cudaSuccess != err)
    throw std::runtime_error(Formatter()
                             << "CUDA kernel failed : " << std::to_string(err));
}
```

Note that I throw std::runtime_error when an error occurs. Pybind11 automatically converts standard exceptions to Python exception types; for example, std::runtime_error is mapped to RuntimeError in Python. This prevents the process from crashing and allows Python to handle errors gracefully. More on error handling with pybind11 can be found here .
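On the Python side, this means the C++ throw surfaces as an ordinary, catchable exception. A stand-in sketch in plain Python (simulating the bound function, since the compiled module is hypothetical here; pybind11 similarly maps std::invalid_argument to ValueError):

```python
def add_gpu(a, b):
    # Stand-in for the bound AddGPU: where the C++ code throws
    # std::invalid_argument, pybind11 raises ValueError in Python.
    if len(a) != len(b):
        raise ValueError(
            f"Size mismatch A.numel(): {len(a)}, B.numel(): {len(b)}")
    return [x + y for x, y in zip(a, b)]

try:
    add_gpu([1.0, 2.0, 3.0], [4.0, 5.0])
except ValueError as e:
    print("Caught:", e)  # handled gracefully, no crash
```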

Bridging CPP with Pybind

PyTorch passes tensors as the at::Tensor type. To extract the mutable raw pointer, use .data<Dtype>() . For example, to extract the raw pointer from a tensor A holding float data, use A.data<float>() . In addition, if you want the CUDA stream of the current context, use the function at::cuda::getCurrentCUDAStream() .

```cpp
template <typename Dtype>
void AddGPU(at::Tensor in_a, at::Tensor in_b, at::Tensor out_c) {
  int N = in_a.numel();
  if (N != in_b.numel())
    throw std::invalid_argument(Formatter()
                                << "Size mismatch A.numel(): " << in_a.numel()
                                << ", B.numel(): " << in_b.numel());

  out_c.resize_({N});
  AddGPUKernel<Dtype>(in_a.data<Dtype>(), in_b.data<Dtype>(),
                      out_c.data<Dtype>(), N,
                      at::cuda::getCurrentCUDAStream());
}
```

The above function can be called directly from Python through pybind11. Now, let's bind the C++ function and the class to Python.

```cpp
namespace py = pybind11;

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  std::string name = std::string("Foo");
  py::class_<Foo>(m, name.c_str())
      .def(py::init<>())
      .def("setKey", &Foo::setKey)
      .def("getKey", &Foo::getKey)
      .def("__repr__", [](const Foo &a) { return a.toString(); });

  m.def("AddGPU", &AddGPU<float>);
}
```

For classes, you need to use py::class_<CLASS> to let pybind11 know it is a class. Then, define the member functions that you want to expose to Python with .def . For free functions, you can simply attach the function using m.def directly.

Compiling the project

Now that we have all source files ready, let's compile them. First, we will build an archive library that contains all classes and functions. Then, we compile the file that binds all functions and classes with pybind11 using setuptools, and load the result in Python.

Finding the Arguments and the Include Paths

First, we have to find the right arguments used to compile a PyTorch extension. This is actually easy to find: when you compile your project using setuptools, you can see the actual compilation command it invokes, and from that we can deduce what is required to build the project from a Makefile. The extra arguments it uses for PyTorch extensions are:

```
-DTORCH_API_INCLUDE_EXTENSION_H
-DTORCH_EXTENSION_NAME=$(EXTENSION_NAME)
-D_GLIBCXX_USE_CXX11_ABI=0
```

In addition, we need to find the headers. PyTorch provides C++ extension support through setuptools, and we can see how it finds the headers and libraries: in torch.utils.cpp_extension you can find the function include_paths , which returns all header paths. We only need to pass these to the Makefile. Within a Makefile, we can run a Python command and get the paths like the following.

```makefile
# Python headers
PYTHON_HEADER_DIR := $(shell python -c 'from distutils.sysconfig import get_python_inc; print(get_python_inc())')
# PyTorch/ATen headers, printed one per line
PYTORCH_INCLUDES := $(shell python -c 'from torch.utils.cpp_extension import include_paths; print("\n".join(include_paths()))')
```

Note that the command above prints the paths line by line, so in the end we can iterate over the paths in the Makefile and prepend -I to each. The final Makefile can be found here .
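The flag construction itself is easy to sketch in plain Python (using sysconfig as a stand-in so the snippet runs without torch installed; with torch available you would call torch.utils.cpp_extension.include_paths() instead):

```python
import sysconfig

# Stand-in for torch.utils.cpp_extension.include_paths(): a list of
# header directories, one path per entry.
include_dirs = [sysconfig.get_paths()["include"]]

# Prepend -I to every path, which is what the Makefile loop does.
include_flags = ["-I" + d for d in include_dirs]
print(" ".join(include_flags))
```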

Archive Libraries

Once we have built the object files, we create an archive file and link it into the main pybind11 module. We can do so by:

```shell
ar rc $(STATIC_LIB) $(OBJS) $(CU_OBJS)
```

Compiling the bind file

When the archive library is ready, we can finally compile the bind file that will link the classes and functions with the binding.

```python
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    ...
    ext_modules=[
        CUDAExtension(
            name='MakePytorchBackend',
            include_dirs=['./'],
            sources=[
                'pybind/bind.cpp',
            ],
            libraries=['make_pytorch'],
            library_dirs=['objs'],
            # extra_compile_args=['-g'],
        )
    ],
    cmdclass={'build_ext': BuildExtension},
    ...
)
```

Finally, if the Makefile automatically calls setup.py , we only need to issue make -j8 to compile all files, build the binding, and install it into the Python environment.

```makefile
all: $(STATIC_LIB)
	python setup.py install --force
```
