Thrust Quick Start Guide

CUDA Toolkit 4.0
PG-05688-040_v01 | January 2011
Document Change History
Version | Date      | Authors | Description of Change
01      | 2011/1/13 | NOJW    | Initial import from wiki.
Introduction
Thrust is a C++ template library for CUDA based on the Standard Template Library
(STL). Thrust allows you to implement high performance parallel applications with
minimal programming effort through a high-level interface that is fully interoperable with
CUDA C.
Thrust provides a rich collection of data parallel primitives such as scan, sort, and reduce,
which can be composed together to implement complex algorithms with concise, readable
source code. By describing your computation in terms of these high-level abstractions you
provide Thrust with the freedom to select the most efficient implementation automatically.
As a result, Thrust can be utilized in rapid prototyping of CUDA applications, where
programmer productivity matters most, as well as in production, where robustness and
absolute performance are crucial.
This document describes how to develop CUDA applications with Thrust. The tutorial is
intended to be accessible, even if you have limited C++ or CUDA experience.
Installation and Versioning
Installing the CUDA Toolkit will copy Thrust header files to the standard CUDA include
directory for your system. Since Thrust is a template library of header files, no further
installation is necessary to start using Thrust.
In addition, new versions of Thrust continue to be available online through the Google
Code Thrust project page. The version of Thrust included in this version of the CUDA
Toolkit corresponds to version 1.4.0 from the Thrust project page.
Vectors
Thrust provides two vector containers, host_vector and device_vector. As the names
suggest, host_vector is stored in host memory while device_vector lives in GPU device
memory. Thrust’s vector containers are just like std::vector in the C++ STL. Like
std::vector, host_vector and device_vector are generic containers (able to store any
data type) that can be resized dynamically. The following source code illustrates the use of
Thrust’s vector containers.
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <iostream>

int main(void)
{
    // H has storage for 4 integers
    thrust::host_vector<int> H(4);

    // initialize individual elements
    H[0] = 14;
    H[1] = 20;
    H[2] = 38;
    H[3] = 46;

    // H.size() returns the size of vector H
    std::cout << "H has size " << H.size() << std::endl;

    // print contents of H
    for (int i = 0; i < H.size(); i++)
        std::cout << "H[" << i << "] = " << H[i] << std::endl;

    // resize H
    H.resize(2);

    std::cout << "H now has size " << H.size() << std::endl;

    // copy host_vector H to device_vector D
    thrust::device_vector<int> D = H;

    // elements of D can be modified
    D[0] = 99;
    D[1] = 88;

    // print contents of D
    for (int i = 0; i < D.size(); i++)
        std::cout << "D[" << i << "] = " << D[i] << std::endl;

    // H and D are automatically deleted when the function returns
    return 0;
}
As this example shows, the = operator can be used to copy a host_vector to a
device_vector (or vice versa). The = operator can also be used to copy host_vector to
host_vector or device_vector to device_vector. Also note that individual elements of
a device_vector can be accessed using the standard bracket notation. However, because
each of these accesses requires a call to cudaMemcpy, they should be used sparingly. We’ll
look at some more efficient techniques later.
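For example, when many elements must be read back, a single bulk copy is far cheaper than
a loop of individual accesses. The following sketch (assuming a device_vector D like the one
in the example above) performs one transfer instead of D.size() separate ones:

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/copy.h>
...
// one bulk transfer instead of one cudaMemcpy per element
thrust::host_vector<int> H_copy(D.size());
thrust::copy(D.begin(), D.end(), H_copy.begin());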
It’s often useful to initialize all the elements of a vector to a specific value, or to copy only
a certain set of values from one vector to another. Thrust provides a few ways to do these
kinds of operations.
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/copy.h>
#include <thrust/fill.h>
#include <thrust/sequence.h>
#include <iostream>

int main(void)
{
    // initialize all ten integers of a device_vector to 1
    thrust::device_vector<int> D(10, 1);

    // set the first seven elements of a vector to 9
    thrust::fill(D.begin(), D.begin() + 7, 9);

    // initialize a host_vector with the first five elements of D
    thrust::host_vector<int> H(D.begin(), D.begin() + 5);

    // set the elements of H to 0, 1, 2, 3, ...
    thrust::sequence(H.begin(), H.end());

    // copy all of H back to the beginning of D
    thrust::copy(H.begin(), H.end(), D.begin());

    // print D
    for (int i = 0; i < D.size(); i++)
        std::cout << "D[" << i << "] = " << D[i] << std::endl;

    return 0;
}
Here we’ve illustrated use of the fill, copy, and sequence functions. The copy function
can be used to copy a range of host or device elements to another host or device vector.
Like the corresponding STL function, thrust::fill simply sets a range of elements to a
specific value. Thrust’s sequence function can be used to create a sequence of equally
spaced values.
Thrust Namespace
You’ll notice that we use things like thrust::host_vector or thrust::copy in our
examples. The thrust:: part tells the C++ compiler that we want to look inside the
thrust namespace for a specific function or class. Namespaces are a nice way to avoid
name collisions. For instance, thrust::copy is different from std::copy provided in the
STL. C++ namespaces allow us to distinguish between these two copy functions.
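For example (a small illustrative sketch; the containers are placeholders), both functions
can coexist in one program because the qualifier says which one we mean:

#include <thrust/copy.h>
#include <thrust/device_vector.h>
#include <vector>
#include <algorithm>
...
thrust::device_vector<int> D(10);
std::vector<int> V(10), W(10);

thrust::copy(D.begin(), D.end(), V.begin()); // Thrust's copy (device to host)
std::copy(V.begin(), V.end(), W.begin());    // the STL's copy (host to host)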
Iterators and Static Dispatching
In this section we used expressions like H.begin() and H.end() or offsets like
D.begin() + 7. The result of begin() and end() is called an iterator in C++. In the
case of vector containers, which are really just arrays, iterators can be thought of as
pointers to array elements. Therefore, H.begin() is an iterator that points to the first
element of the array stored inside the H vector. Similarly, H.end() points to the element
one past the last element of the H vector.
Although vector iterators are similar to pointers they carry more information with them.
Notice that we did not have to tell thrust::fill that it was operating on a
device_vector iterator. This information is captured in the type of the iterator returned
by D.begin() which is different than the type returned by H.begin(). When a Thrust
function is called, it inspects the type of the iterator to determine whether to use a host or
a device implementation. This process is known as static dispatching since the host/device
dispatch is resolved at compile time. Note that this implies that there is no runtime
overhead to the dispatch process.
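As a brief illustration (a sketch, using containers like those above), the same thrust::fill
call resolves to a host loop or a device kernel depending solely on the iterator type:

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/fill.h>
...
thrust::host_vector<int>   H(10);
thrust::device_vector<int> D(10);

thrust::fill(H.begin(), H.end(), 7); // host iterators: host implementation
thrust::fill(D.begin(), D.end(), 7); // device iterators: device implementation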
You may wonder what happens when a “raw” pointer is used as an argument to a Thrust
function. Like the STL, Thrust permits this usage and it will dispatch the host path of the
algorithm. If the pointer in question is in fact a pointer to device memory then you’ll need
to wrap it with thrust::device_ptr before calling the function. For example:
size_t N = 10;

// raw pointer to device memory
int * raw_ptr;
cudaMalloc((void **) &raw_ptr, N * sizeof(int));

// wrap raw pointer with a device_ptr
thrust::device_ptr<int> dev_ptr(raw_ptr);

// use device_ptr in Thrust algorithms
thrust::fill(dev_ptr, dev_ptr + N, (int) 0);
To extract a raw pointer from a device_ptr, raw_pointer_cast should be applied as
follows:
size_t N = 10;

// create a device_ptr
thrust::device_ptr<int> dev_ptr = thrust::device_malloc<int>(N);

// extract raw pointer from device_ptr
int * raw_ptr = thrust::raw_pointer_cast(dev_ptr);
Another reason to distinguish between iterators and pointers is that iterators can be used
to traverse many kinds of data structures. For example, the STL provides a linked list
container (std::list) that provides bidirectional (but not random access) iterators.
Although Thrust does not provide device implementations of such containers, it is
compatible with them.
#include <thrust/device_vector.h>
#include <thrust/copy.h>
#include <list>
#include <vector>

int main(void)
{
    // create an STL list with 4 values
    std::list<int> stl_list;

    stl_list.push_back(10);
    stl_list.push_back(20);
    stl_list.push_back(30);
    stl_list.push_back(40);

    // initialize a device_vector with the list
    thrust::device_vector<int> D(stl_list.begin(), stl_list.end());

    // copy a device_vector into an STL vector
    std::vector<int> stl_vector(D.size());
    thrust::copy(D.begin(), D.end(), stl_vector.begin());

    return 0;
}
For Future Reference: The iterators we’ve covered so far are useful, but fairly basic. In
addition to these normal iterators, Thrust also provides a collection of fancy iterators with
names like counting_iterator and zip_iterator. While they look and feel like normal
iterators, fancy iterators are capable of more exciting things. We’ll revisit this topic later
in the tutorial.
Algorithms
Thrust provides a large number of common parallel algorithms. Many of these algorithms
have direct analogs in the STL, and when an equivalent STL function exists, we choose the
same name (e.g. thrust::sort and std::sort).
All algorithms in Thrust have implementations for both host and device. Specifically,
when a Thrust algorithm is invoked with a host iterator, then the host path is dispatched.
Similarly, a device implementation is called when a device iterator is used to define a range.
With the exception of thrust::copy, which can copy data between host and device, all
iterator arguments to a Thrust algorithm should live in the same place: either all on the
host or all on the device. When this requirement is violated the compiler will produce an
error message.
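A short sketch of the rule (the container names are illustrative):

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/copy.h>
#include <thrust/sort.h>
...
thrust::host_vector<int>   H(10);
thrust::device_vector<int> D(10);

thrust::copy(H.begin(), H.end(), D.begin()); // OK: copy may cross host and device
thrust::sort(D.begin(), D.end());            // OK: both iterators live on the device
// thrust::sort(H.begin(), D.end());         // error: a range cannot mix host and device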
Transformations
Transformations are algorithms that apply an operation to each element in a set of (zero or
more) input ranges and then store the result in a destination range. One example we have
already seen is thrust::fill, which sets all elements of a range to a specified value.
Other transformations include thrust::sequence, thrust::replace, and of course
thrust::transform. Refer to the documentation for a complete listing.
The following source code demonstrates several of the transformation algorithms. Note
that thrust::negate and thrust::modulus are known as functors in C++ terminology.
Thrust provides these and other common functors like plus and multiplies in the file
thrust/functional.h.
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/sequence.h>
#include <thrust/copy.h>
#include <thrust/fill.h>
#include <thrust/replace.h>
#include <thrust/functional.h>
#include <iostream>
#include <iterator>

int main(void)
{
    // allocate three device_vectors with 10 elements
    thrust::device_vector<int> X(10);
    thrust::device_vector<int> Y(10);
    thrust::device_vector<int> Z(10);

    // initialize X to 0, 1, 2, 3, ...
    thrust::sequence(X.begin(), X.end());

    // compute Y = -X
    thrust::transform(X.begin(), X.end(), Y.begin(), thrust::negate<int>());

    // fill Z with twos
    thrust::fill(Z.begin(), Z.end(), 2);

    // compute Y = X mod 2
    thrust::transform(X.begin(), X.end(), Z.begin(), Y.begin(), thrust::modulus<int>());

    // replace all the ones in Y with tens
    thrust::replace(Y.begin(), Y.end(), 1, 10);

    // print Y
    thrust::copy(Y.begin(), Y.end(), std::ostream_iterator<int>(std::cout, "\n"));

    return 0;
}
While the functors in thrust/functional.h cover most of the built-in arithmetic and
comparison operations, we often want to do something different. For example, consider the
vector operation y <- a * x + y where x and y are vectors and a is a scalar constant.
This is the well-known SAXPY operation provided by any BLAS library.
If we want to implement SAXPY with Thrust we have a few options. The first is to use two
transformations (one addition and one multiplication) and a temporary vector filled with
the value a. A better choice is to use a single transformation with a user-defined functor
that does exactly what we want. We illustrate both approaches in the source code below.
struct saxpy_functor
{
    const float a;

    saxpy_functor(float _a) : a(_a) {}

    __host__ __device__
    float operator()(const float& x, const float& y) const {
        return a * x + y;
    }
};

void saxpy_fast(float A, thrust::device_vector<float>& X, thrust::device_vector<float>& Y)
{
    // Y <- A * X + Y
    thrust::transform(X.begin(), X.end(), Y.begin(), Y.begin(), saxpy_functor(A));
}

void saxpy_slow(float A, thrust::device_vector<float>& X, thrust::device_vector<float>& Y)
{
    thrust::device_vector<float> temp(X.size());

    // temp <- A
    thrust::fill(temp.begin(), temp.end(), A);

    // temp <- A * X
    thrust::transform(X.begin(), X.end(), temp.begin(), temp.begin(), thrust::multiplies<float>());

    // Y <- A * X + Y
    thrust::transform(temp.begin(), temp.end(), Y.begin(), Y.begin(), thrust::plus<float>());
}
Both saxpy_fast and saxpy_slow are valid SAXPY implementations, but saxpy_fast
will be significantly faster than saxpy_slow. Ignoring the cost of allocating the temp
vector and the arithmetic operations, we have the following costs:

- saxpy_fast: performs 2N reads and N writes
- saxpy_slow: performs 4N reads and 3N writes
Since SAXPY is memory bound (its performance is limited by memory bandwidth, not
floating point performance) the larger number of reads and writes makes saxpy_slow much
more expensive. In contrast, saxpy_fast will perform about as fast as SAXPY in an
optimized BLAS implementation. In memory bound algorithms like SAXPY it is generally
worthwhile to apply kernel fusion (combining multiple operations into a single kernel) to
minimize the number of memory transactions.
thrust::transform only supports transformations with one or two input arguments (e.g.
f(x) -> y and f(x, y) -> z). When a transformation uses more than two input arguments
it is necessary to use a different approach. The arbitrary_transformation example
demonstrates a solution that uses thrust::zip_iterator and thrust::for_each, as
sketched below.
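A condensed sketch in the spirit of that example (computing D = A + B * C elementwise;
the names here are illustrative):

#include <thrust/device_vector.h>
#include <thrust/for_each.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/tuple.h>

// functor applied to each tuple (a, b, c, d): writes d = a + b * c
struct arbitrary_functor
{
    template <typename Tuple>
    __host__ __device__
    void operator()(Tuple t)
    {
        thrust::get<3>(t) = thrust::get<0>(t) + thrust::get<1>(t) * thrust::get<2>(t);
    }
};
...
thrust::device_vector<float> A(5), B(5), C(5), D(5);
// ... initialize A, B, C ...

// apply the functor to the zipped ranges
thrust::for_each(thrust::make_zip_iterator(thrust::make_tuple(A.begin(), B.begin(), C.begin(), D.begin())),
                 thrust::make_zip_iterator(thrust::make_tuple(A.end(),   B.end(),   C.end(),   D.end())),
                 arbitrary_functor());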
Reductions
A reduction algorithm uses a binary operation to reduce an input sequence to a single
value. For example, the sum of an array of numbers is obtained by reducing the array with
a plus operation. Similarly, the maximum of an array is obtained by reducing with an
operator that takes two inputs and returns the maximum. The sum of an array is
implemented with thrust::reduce as follows:
int sum = thrust::reduce(D.begin(), D.end(), (int) 0, thrust::plus<int>());
The first two arguments to reduce define the range of values while the third and fourth
parameters provide the initial value and reduction operator respectively. Actually, this
kind of reduction is so common that it is the default choice when no initial value or
operator is provided. The following three lines are therefore equivalent:
int sum = thrust::reduce(D.begin(), D.end(), (int) 0, thrust::plus<int>());
int sum = thrust::reduce(D.begin(), D.end(), (int) 0);
int sum = thrust::reduce(D.begin(), D.end());
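The maximum mentioned earlier is computed the same way; only the operator and the initial
value change (a brief sketch, with D a device_vector of ints as before):

#include <thrust/reduce.h>
#include <thrust/functional.h>
#include <climits>
...
// reduce with the maximum operator; INT_MIN cannot exceed any element
int max_val = thrust::reduce(D.begin(), D.end(), INT_MIN, thrust::maximum<int>());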
Although thrust::reduce is sufficient to implement a wide variety of reductions, Thrust
provides a few additional functions for convenience (like the STL). For example,
thrust::count returns the number of instances of a specific value in a given sequence:
#include <thrust/count.h>
#include <thrust/device_vector.h>
...
// put three 1s in a device_vector
thrust::device_vector<int> vec(5, 0);
vec[1] = 1;
vec[3] = 1;
vec[4] = 1;

// count the 1s
int result = thrust::count(vec.begin(), vec.end(), 1);
// result is three
Other reduction operations include thrust::count_if, thrust::min_element,
thrust::max_element, thrust::is_sorted, thrust::inner_product, and several others.
Refer to the documentation for a complete listing.
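For instance, thrust::count_if generalizes the count example above to an arbitrary
predicate (a small sketch; is_positive is an illustrative functor, and vec is the vector
from the previous example):

#include <thrust/count.h>
...
// illustrative predicate functor
struct is_positive
{
    __host__ __device__
    bool operator()(const int& x) const { return x > 0; }
};
...
// count the elements of vec that are greater than zero
int n = thrust::count_if(vec.begin(), vec.end(), is_positive());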
The SAXPY example in the Transformations section showed how kernel fusion can be
used to reduce the number of memory transfers used by a transformation kernel. With
thrust::transform_reduce we can also apply kernel fusion to reduction kernels. Consider
the following example which computes the norm of a vector.
#include <thrust/transform_reduce.h>
#include <thrust/functional.h>
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <cmath>
#include <iostream>

// square<T> computes the square of a number f(x) -> x*x
template <typename T>
struct square
{
    __host__ __device__
    T operator()(const T& x) const {
        return x * x;
    }
};

int main(void)
{
    // initialize host array
    float x[4] = {1.0, 2.0, 3.0, 4.0};

    // transfer to device
    thrust::device_vector<float> d_x(x, x + 4);

    // setup arguments
    square<float>       unary_op;
    thrust::plus<float> binary_op;
    float init = 0;

    // compute norm
    float norm = std::sqrt(thrust::transform_reduce(d_x.begin(), d_x.end(), unary_op, init, binary_op));

    std::cout << norm << std::endl;

    return 0;
}
Here we have a unary operator called square that squares each element of the input
sequence. The sum of squares is then computed using a standard plus reduction. Like the
slower version of the SAXPY transformation, we could implement norm with multiple
passes: first a transform using square or perhaps just multiplies and then a plus
reduction over a temporary array. However, this would be unnecessarily wasteful and
considerably slower. By fusing the square operation with the reduction kernel we again
have a highly optimized implementation which offers the same performance as
hand-written kernels.
Prex-Sums
Parallel prefix-sums, or scan operations, are important building blocks in many parallel
algorithms such as stream compaction and radix sort. Consider the following source code
which illustrates an inclusive scan operation using the default plus operator:
#include <thrust/scan.h>

int data[6] = {1, 0, 2, 2, 1, 3};

thrust::inclusive_scan(data, data + 6, data); // in-place scan

// data is now {1, 1, 3, 5, 6, 9}
In an inclusive scan each element of the output is the corresponding partial sum of the
input range. For example, data[2] = data[0] + data[1] + data[2]. An exclusive scan
is similar, but shifted by one place to the right:
#include <thrust/scan.h>

int data[6] = {1, 0, 2, 2, 1, 3};

thrust::exclusive_scan(data, data + 6, data); // in-place scan

// data is now {0, 1, 1, 3, 5, 6}
So now data[2] = data[0] + data[1]. As these examples show, inclusive_scan and
exclusive_scan are permitted to be performed in-place. Thrust also provides the
functions transform_inclusive_scan and transform_exclusive_scan which apply a
unary function to the input sequence before performing the scan. Refer to the
documentation for a complete list of scan variants.
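For example (a minimal sketch reusing the data above), transform_inclusive_scan negates
each element before computing the plus-scan:

#include <thrust/transform_scan.h>
#include <thrust/functional.h>

int data[6] = {1, 0, 2, 2, 1, 3};

// negate each element, then perform an inclusive plus-scan, in-place
thrust::transform_inclusive_scan(data, data + 6, data,
                                 thrust::negate<int>(), thrust::plus<int>());

// data is now {-1, -1, -3, -5, -6, -9}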
Reordering
Thrust provides support for partitioning and stream compaction through the following
algorithms:
- copy_if: copy elements that pass a predicate test (see the sketch after this list)
- partition: reorder elements according to a predicate (true values precede false values)
- remove and remove_if: remove elements that fail a predicate test
- unique: remove consecutive duplicates within a sequence

Refer to the documentation for a complete list of reordering functions and examples of
their usage.
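As one example of stream compaction (a sketch; is_even is an illustrative predicate and the
vectors are placeholders), copy_if compacts the values that pass the test:

#include <thrust/copy.h>
#include <thrust/device_vector.h>

// illustrative predicate: keep even values
struct is_even
{
    __host__ __device__
    bool operator()(const int& x) const { return (x % 2) == 0; }
};
...
thrust::device_vector<int> src(6), dst(6);
// ... initialize src ...

// copy_if returns an iterator marking the end of the compacted output
thrust::device_vector<int>::iterator dst_end =
    thrust::copy_if(src.begin(), src.end(), dst.begin(), is_even());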
Sorting
Thrust offers several functions to sort data or rearrange data according to a given
criterion. The thrust::sort and thrust::stable_sort functions are direct analogs of
sort and stable_sort in the STL.
#include <thrust/sort.h>
...
const int N = 6;
int A[N] = {1, 4, 2, 8, 5, 7};

thrust::sort(A, A + N);
// A is now {1, 2, 4, 5, 7, 8}
In addition, Thrust provides thrust::sort_by_key and thrust::stable_sort_by_key,
which sort key-value pairs stored in separate places.
#include <thrust/sort.h>
...
const int N = 6;
int  keys[N]   = {1, 4, 2, 8, 5, 7};
char values[N] = {'a', 'b', 'c', 'd', 'e', 'f'};

thrust::sort_by_key(keys, keys + N, values);
// keys is now   {1, 2, 4, 5, 7, 8}
// values is now {'a', 'c', 'b', 'e', 'f', 'd'}
Like their STL brethren, the sorting functions also accept user-defined comparison
operators:
#include <thrust/sort.h>
#include <thrust/functional.h>
...
const int N = 6;
int A[N] = {1, 4, 2, 8, 5, 7};

thrust::stable_sort(A, A + N, thrust::greater<int>());
// A is now {8, 7, 5, 4, 2, 1}
Fancy Iterators
Fancy iterators serve a variety of valuable purposes. In this section we’ll show how
fancy iterators allow us to attack a broader class of problems with the standard Thrust
algorithms. For those familiar with the Boost C++ Libraries, note that our fancy iterators
were inspired by (and generally derived from) those in the Boost Iterator Library.
constant_iterator
Arguably the simplest of the bunch, constant_iterator is simply an iterator that returns
the same value whenever we access it. In the following example we initialize a constant
iterator with the value 10.
#include <thrust/iterator/constant_iterator.h>
...
// create iterators
thrust::constant_iterator<int> first(10);
thrust::constant_iterator<int> last = first + 3;

first[0]   // returns 10
first[1]   // returns 10
first[100] // returns 10

// sum of [first, last)
thrust::reduce(first, last); // returns 30 (i.e. 3 * 10)
Whenever an input sequence of constant values is needed, constant_iterator is a
convenient and efficient solution.
counting_iterator
If a sequence of increasing values is required, then counting_iterator is the appropriate
choice. Here we initialize a counting_iterator with the value 10 and access it like an
array.
#include <thrust/iterator/counting_iterator.h>
...
// create iterators
thrust::counting_iterator<int> first(10);
thrust::counting_iterator<int> last = first + 3;

first[0]   // returns 10
first[1]   // returns 11
first[100] // returns 110

// sum of [first, last)
thrust::reduce(first, last); // returns 33 (i.e. 10 + 11 + 12)
While constant_iterator and counting_iterator act as arrays, they don’t actually
require any memory storage. Whenever we dereference one of these iterators it generates
the appropriate value on-the-fly and returns it to the calling function.
transform_iterator
In the Algorithms section we spoke about kernel fusion, i.e. combining separate algorithms
like transform and reduce into a single transform_reduce operation. The
transform_iterator allows us to apply the same technique, even when we don’t have a
special transform_xxx version of the algorithm. This example shows another way to fuse
a transformation with a reduction, this time with just plain reduce applied to a
transform_iterator.
#include <thrust/iterator/transform_iterator.h>

// initialize vector
thrust::device_vector<int> vec(3);
vec[0] = 10; vec[1] = 20; vec[2] = 30;

// create iterators (types omitted)
... first = thrust::make_transform_iterator(vec.begin(), thrust::negate<int>());
... last  = thrust::make_transform_iterator(vec.end(),   thrust::negate<int>());

first[0] // returns -10
first[1] // returns -20
first[2] // returns -30

// sum of [first, last)
thrust::reduce(first, last); // returns -60 (i.e. -10 + -20 + -30)
Note that we have omitted the types of the iterators first and last for simplicity. One downside
of transform_iterator is that it can be cumbersome to specify the full type of the
iterator, which can be quite lengthy. For this reason, it is common practice to simply put
the call to make_transform_iterator in the arguments of the algorithm being invoked.
For example,
// sum of [first, last)
thrust::reduce(thrust::make_transform_iterator(vec.begin(), thrust::negate<int>()),
               thrust::make_transform_iterator(vec.end(),   thrust::negate<int>()));
allows us to avoid creating a variable to store first and last.
permutation_iterator
In the previous section we showed how transform_iterator is used to fuse a
transformation with another algorithm to avoid unnecessary memory operations. The
permutation_iterator is similar: it allows us to fuse gather and scatter operations with
Thrust algorithms, or even other fancy iterators. The following example shows how to fuse
a gather operation with a reduction.
#include <thrust/iterator/permutation_iterator.h>
...
// gather locations
thrust::device_vector<int> map(4);
map[0] = 3;
map[1] = 1;
map[2] = 0;
map[3] = 5;

// array to gather from
thrust::device_vector<int> source(6);
source[0] = 10;
source[1] = 20;
source[2] = 30;
source[3] = 40;
source[4] = 50;
source[5] = 60;

// fuse gather with reduction:
// sum = source[map[0]] + source[map[1]] + ...
int sum = thrust::reduce(thrust::make_permutation_iterator(source.begin(), map.begin()),
                         thrust::make_permutation_iterator(source.begin(), map.end()));
Here we have used the make_permutation_iterator function to simplify the construction
of the permutation_iterators. The first argument to make_permutation_iterator is the
source array of the gather operation and the second is the list of map indices. Note that
we pass in source.begin() for the first argument in both cases, but vary the second
argument to define the beginning and end of the sequence.
When a permutation_iterator is used as an output sequence of a function it is equivalent
to fusing a scatter operation into the algorithm. In general, permutation_iterator allows
you to operate on a specific set of values in a sequence instead of the entire sequence.
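For instance (a brief sketch with illustrative names), copying through a
permutation_iterator scatters the copied values:

#include <thrust/iterator/permutation_iterator.h>
#include <thrust/copy.h>
#include <thrust/device_vector.h>
...
thrust::device_vector<int> values(4), map(4), dest(6);
// ... initialize values and map ...

// writing through the permutation_iterator performs dest[map[i]] = values[i]
thrust::copy(values.begin(), values.end(),
             thrust::make_permutation_iterator(dest.begin(), map.begin()));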
zip_iterator
Keep reading; we’ve saved the best iterator for last! The zip_iterator is an extremely
useful gadget: it takes multiple input sequences and yields a sequence of tuples. In this
example we “zip” together a sequence of int and a sequence of char into a sequence of
tuple<int,char> and compute the tuple with the maximum value.
#include <thrust/iterator/zip_iterator.h>
...
// initialize vectors
thrust::device_vector<int>  A(3);
thrust::device_vector<char> B(3);
A[0] = 10;  A[1] = 20;  A[2] = 30;
B[0] = 'x'; B[1] = 'y'; B[2] = 'z';

// create iterators (types omitted)
first = thrust::make_zip_iterator(thrust::make_tuple(A.begin(), B.begin()));
last  = thrust::make_zip_iterator(thrust::make_tuple(A.end(),   B.end()));

first[0] // returns tuple(10, 'x')
first[1] // returns tuple(20, 'y')
first[2] // returns tuple(30, 'z')

// maximum of [first, last)
thrust::maximum< thrust::tuple<int, char> > binary_op;
thrust::tuple<int, char> init = first[0];

thrust::reduce(first, last, init, binary_op); // returns tuple(30, 'z')
What makes zip_iterator so useful is that most algorithms accept either one, or
occasionally two, input sequences. The zip_iterator allows us to combine many
independent sequences into a single sequence of tuples, which can be processed by a broad
set of algorithms.
Refer to the arbitrary_transformation example to see how to implement a ternary
transformation with zip_iterator and for_each. A simple extension of this example
would allow you to compute transformations with multiple output sequences as well.
In addition to convenience, zip_iterator allows us to implement programs more
efficiently. For example, storing 3d points as an array of float3 in CUDA is generally a
bad idea, since array accesses are not properly coalesced. With zip_iterator we can store
the three coordinates in three separate arrays, which does permit coalesced memory access.
In this case, we use zip_iterator to create a virtual array of 3d vectors which we can feed
into Thrust algorithms. Refer to the dot_products_with_zip example for additional
details.
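A condensed sketch of that structure-of-arrays layout (the names are illustrative):

#include <thrust/device_vector.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/tuple.h>
...
// store the x, y, and z coordinates in three separate arrays
thrust::device_vector<float> X(100), Y(100), Z(100);

// a "virtual" array of 3d points: dereferencing yields the tuple (X[i], Y[i], Z[i])
typedef thrust::device_vector<float>::iterator FloatIterator;
thrust::zip_iterator<thrust::tuple<FloatIterator, FloatIterator, FloatIterator> >
    point_first = thrust::make_zip_iterator(thrust::make_tuple(X.begin(), Y.begin(), Z.begin()));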
Additional Resources
This guide only scratches the surface of what you can do with Thrust. The following
resources can help you learn to do more with Thrust or provide assistance when problems
arise.
- Comprehensive Documentation of Thrust’s API
- A list of Frequently Asked Questions
- MegaNewtons: A blog for Thrust developers
- Collection of example programs
We strongly encourage users to subscribe to the thrust-users mailing list. The mailing list
is a great place to seek out help from the Thrust developers and other Thrust users.
Notice
ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER
DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA MAKES NO
WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND
EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR
A PARTICULAR PURPOSE.
Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no
responsibility for the consequences of use of such information or for any infringement of patents or other
rights of third parties that may result from its use. No license is granted by implication or otherwise under
any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change
without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA
Corporation products are not authorized as critical components in life support devices or systems without
express written approval of NVIDIA Corporation.
Trademarks
NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S.
and other countries. Other company and product names may be trademarks of the respective companies with
which they are associated.
Copyright
© 2011 NVIDIA Corporation. All rights reserved.
