CUDA C PROGRAMMING GUIDE
PG-02829-001_v9.1 | December 2017
Design Guide
CHANGES FROM VERSION 9.0
‣ Documented restriction that operator-overloads cannot be __global__ functions in Operator Function.
‣ Removed guidance to break 8-byte shuffles into two 4-byte instructions. 8-byte shuffle variants are provided since CUDA 9.0. See Warp Shuffle Functions (and the illustrative sketch after this list).
‣ Passing __restrict__ references to __global__ functions is now supported. Updated comment in __global__ functions and function templates.
‣ Documented CUDA_ENABLE_CRC_CHECK in CUDA Environment Variables.
‣ Warp matrix functions [PREVIEW FEATURE] now support matrix products with m=32, n=8, k=16 and m=8, n=32, k=16 in addition to m=n=k=16.
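The warp shuffle change can be illustrated with a minimal sketch; the kernel name, launch configuration, and host-side scaffolding below are hypothetical, and the code assumes CUDA 9.0 or later on a device of compute capability 3.0 or higher:

// Each lane reads the double-precision value held by lane 0 of its warp.
// Since CUDA 9.0, __shfl_sync() accepts 8-byte types such as double directly,
// so the value no longer needs to be split into two 4-byte shuffles.
__global__ void broadcastDouble(double* out)
{
    double v = (double)threadIdx.x;                  // per-lane value
    double fromLaneZero = __shfl_sync(0xffffffff, v, 0);
    out[threadIdx.x] = fromLaneZero;
}

int main()
{
    double* d_out;
    cudaMalloc(&d_out, 32 * sizeof(double));
    broadcastDouble<<<1, 32>>>(d_out);               // one warp of 32 threads
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}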
TABLE OF CONTENTS
Chapter 1. Introduction.........................................................................................1
1.1. From Graphics Processing to General Purpose Parallel Computing............................... 1
1.2. CUDA®: A General-Purpose Parallel Computing Platform and Programming Model.............3
1.3. A Scalable Programming Model.........................................................................4
1.4. Document Structure...................................................................................... 6
Chapter 2. Programming Model............................................................................... 8
2.1. Kernels...................................................................................................... 8
2.2. Thread Hierarchy......................................................................................... 9
2.3. Memory Hierarchy....................................................................................... 11
2.4. Heterogeneous Programming.......................................................................... 13
2.5. Compute Capability..................................................................................... 15
Chapter 3. Programming Interface..........................................................................16
3.1. Compilation with NVCC................................................................................ 16
3.1.1. Compilation Workflow............................................................................. 17
3.1.1.1. Offline Compilation.......................................................................... 17
3.1.1.2. Just-in-Time Compilation....................................................................17
3.1.2. Binary Compatibility............................................................................... 17
3.1.3. PTX Compatibility.................................................................................. 18
3.1.4. Application Compatibility.........................................................................18
3.1.5. C/C++ Compatibility............................................................................... 19
3.1.6. 64-Bit Compatibility............................................................................... 19
3.2. CUDA C Runtime.........................................................................................19
3.2.1. Initialization.........................................................................................20
3.2.2. Device Memory..................................................................................... 20
3.2.3. Shared Memory..................................................................................... 24
3.2.4. Page-Locked Host Memory........................................................................29
3.2.4.1. Portable Memory..............................................................................30
3.2.4.2. Write-Combining Memory....................................................................30
3.2.4.3. Mapped Memory...............................................................................30
3.2.5. Asynchronous Concurrent Execution............................................................ 31
3.2.5.1. Concurrent Execution between Host and Device........................................ 32
3.2.5.2. Concurrent Kernel Execution............................................................... 32
3.2.5.3. Overlap of Data Transfer and Kernel Execution......................................... 32
3.2.5.4. Concurrent Data Transfers.................................................................. 33
3.2.5.5. Streams......................................................................................... 33
3.2.5.6. Events...........................................................................................37
3.2.5.7. Synchronous Calls.............................................................................38
3.2.6. Multi-Device System............................................................................... 38
3.2.6.1. Device Enumeration.......................................................................... 38
3.2.6.2. Device Selection.............................................................................. 38
3.2.6.3. Stream and Event Behavior................................................................. 39
3.2.6.4. Peer-to-Peer Memory Access................................................................39
3.2.6.5. Peer-to-Peer Memory Copy..................................................................40
3.2.7. Unified Virtual Address Space................................................................... 41
3.2.8. Interprocess Communication..................................................................... 41
3.2.9. Error Checking......................................................................................42
3.2.10. Call Stack.......................................................................................... 42
3.2.11. Texture and Surface Memory................................................................... 42
3.2.11.1. Texture Memory............................................................................. 43
3.2.11.2. Surface Memory............................................................................. 52
3.2.11.3. CUDA Arrays..................................................................................56
3.2.11.4. Read/Write Coherency..................................................................... 56
3.2.12. Graphics Interoperability........................................................................ 56
3.2.12.1. OpenGL Interoperability................................................................... 57
3.2.12.2. Direct3D Interoperability...................................................................59
3.2.12.3. SLI Interoperability..........................................................................65
3.3. Versioning and Compatibility.......................................................................... 66
3.4. Compute Modes..........................................................................................67
3.5. Mode Switches........................................................................................... 68
3.6. Tesla Compute Cluster Mode for Windows.......................................................... 68
Chapter 4. Hardware Implementation......................................................................70
4.1. SIMT Architecture....................................................................................... 70
4.2. Hardware Multithreading...............................................................................72
Chapter 5. Performance Guidelines........................................................................ 74
5.1. Overall Performance Optimization Strategies...................................................... 74
5.2. Maximize Utilization.................................................................................... 74
5.2.1. Application Level...................................................................................74
5.2.2. Device Level........................................................................................ 75
5.2.3. Multiprocessor Level...............................................................................75
5.2.3.1. Occupancy Calculator........................................................................ 77
5.3. Maximize Memory Throughput........................................................................ 79
5.3.1. Data Transfer between Host and Device....................................................... 80
5.3.2. Device Memory Accesses..........................................................................81
5.4. Maximize Instruction Throughput..................................................................... 85
5.4.1. Arithmetic Instructions............................................................................85
5.4.2. Control Flow Instructions......................................................................... 89
5.4.3. Synchronization Instruction.......................................................................90
Appendix A. CUDA-Enabled GPUs........................................................................... 91
Appendix B. C Language Extensions........................................................................ 92
B.1. Function Execution Space Specifiers.................................................................92
B.1.1. __device__.......................................................................................... 92
B.1.2. __global__...........................................................................................92
B.1.3. __host__............................................................................................. 93
B.1.4. __noinline__ and __forceinline__............................................................... 93
B.2. Variable Memory Space Specifiers....................................................................93
B.2.1. __device__.......................................................................................... 94
B.2.2. __constant__........................................................................................94
B.2.3. __shared__.......................................................................................... 94
B.2.4. __managed__....................................................................................... 95
B.2.5. __restrict__......................................................................................... 95
B.3. Built-in Vector Types................................................................................... 97
B.3.1. char, short, int, long, longlong, float, double................................................ 97
B.3.2. dim3.................................................................................................. 98
B.4. Built-in Variables........................................................................................ 98
B.4.1. gridDim.............................................................................................. 98
B.4.2. blockIdx..............................................................................................98
B.4.3. blockDim.............................................................................................98
B.4.4. threadIdx............................................................................................ 98
B.4.5. warpSize............................................................................................. 99
B.5. Memory Fence Functions...............................................................................99
B.6. Synchronization Functions............................................................................ 101
B.7. Mathematical Functions...............................................................................103
B.8. Texture Functions...................................................................................... 103
B.8.1. Texture Object API............................................................................... 103
B.8.1.1. tex1Dfetch()..................................................................................103
B.8.1.2. tex1D()........................................................................................ 103
B.8.1.3. tex1DLod()....................................................................................103
B.8.1.4. tex1DGrad().................................................................................. 103
B.8.1.5. tex2D()........................................................................................ 104
B.8.1.6. tex2DLod()....................................................................................104
B.8.1.7. tex2DGrad().................................................................................. 104
B.8.1.8. tex3D()........................................................................................ 104
B.8.1.9. tex3DLod()....................................................................................104
B.8.1.10. tex3DGrad().................................................................................104
B.8.1.11. tex1DLayered()............................................................................. 105
B.8.1.12. tex1DLayeredLod().........................................................................105
B.8.1.13. tex1DLayeredGrad()....................................................................... 105
B.8.1.14. tex2DLayered()............................................................................. 105
B.8.1.15. tex2DLayeredLod().........................................................................105
B.8.1.16. tex2DLayeredGrad()....................................................................... 105
B.8.1.17. texCubemap().............................................................................. 106
B.8.1.18. texCubemapLod().......................................................................... 106
B.8.1.19. texCubemapLayered().....................................................................106
B.8.1.20. texCubemapLayeredLod()................................................................ 106
B.8.1.21. tex2Dgather()...............................................................................106
B.8.2. Texture Reference API........................................................................... 107
B.8.2.1. tex1Dfetch()..................................................................................107
B.8.2.2. tex1D()........................................................................................ 107
B.8.2.3. tex1DLod()....................................................................................108
B.8.2.4. tex1DGrad().................................................................................. 108
B.8.2.5. tex2D()........................................................................................ 108
B.8.2.6. tex2DLod()....................................................................................108
B.8.2.7. tex2DGrad().................................................................................. 108
B.8.2.8. tex3D()........................................................................................ 109
B.8.2.9. tex3DLod()....................................................................................109
B.8.2.10. tex3DGrad().................................................................................109
B.8.2.11. tex1DLayered()............................................................................. 109
B.8.2.12. tex1DLayeredLod().........................................................................110
B.8.2.13. tex1DLayeredGrad()....................................................................... 110
B.8.2.14. tex2DLayered()............................................................................. 110
B.8.2.15. tex2DLayeredLod().........................................................................110
B.8.2.16. tex2DLayeredGrad()....................................................................... 111
B.8.2.17. texCubemap().............................................................................. 111
B.8.2.18. texCubemapLod().......................................................................... 111
B.8.2.19. texCubemapLayered().....................................................................111
B.8.2.20. texCubemapLayeredLod()................................................................ 111
B.8.2.21. tex2Dgather()...............................................................................112
B.9. Surface Functions...................................................................................... 112
B.9.1. Surface Object API............................................................................... 112
B.9.1.1. surf1Dread().................................................................................. 112
B.9.1.2. surf1Dwrite................................................................................... 112
B.9.1.3. surf2Dread().................................................................................. 113
B.9.1.4. surf2Dwrite()................................................................................. 113
B.9.1.5. surf3Dread().................................................................................. 113
B.9.1.6. surf3Dwrite()................................................................................. 113
B.9.1.7. surf1DLayeredread()........................................................................ 114
B.9.1.8. surf1DLayeredwrite()....................................................................... 114
B.9.1.9. surf2DLayeredread()........................................................................ 114
B.9.1.10. surf2DLayeredwrite()...................................................................... 114
B.9.1.11. surfCubemapread()........................................................................ 115
B.9.1.12. surfCubemapwrite()....................................................................... 115
B.9.1.13. surfCubemapLayeredread()...............................................................115
B.9.1.14. surfCubemapLayeredwrite()..............................................................115
B.9.2. Surface Reference API........................................................................... 116
B.9.2.1. surf1Dread().................................................................................. 116
B.9.2.2. surf1Dwrite................................................................................... 116
B.9.2.3. surf2Dread().................................................................................. 116
B.9.2.4. surf2Dwrite()................................................................................. 116
B.9.2.5. surf3Dread().................................................................................. 117
B.9.2.6. surf3Dwrite()................................................................................. 117
B.9.2.7. surf1DLayeredread()........................................................................ 117
B.9.2.8. surf1DLayeredwrite()....................................................................... 117
B.9.2.9. surf2DLayeredread()........................................................................ 118
B.9.2.10. surf2DLayeredwrite()...................................................................... 118
B.9.2.11. surfCubemapread()........................................................................ 118
B.9.2.12. surfCubemapwrite()....................................................................... 118
B.9.2.13. surfCubemapLayeredread()...............................................................119
B.9.2.14. surfCubemapLayeredwrite()..............................................................119
B.10. Read-Only Data Cache Load Function.............................................................119
B.11. Time Function.........................................................................................119
B.12. Atomic Functions..................................................................................... 120
B.12.1. Arithmetic Functions........................................................................... 121
B.12.1.1. atomicAdd().................................................................................121
B.12.1.2. atomicSub()................................................................................. 121
B.12.1.3. atomicExch()................................................................................122
B.12.1.4. atomicMin()................................................................................. 122
B.12.1.5. atomicMax().................................................................................122
B.12.1.6. atomicInc()..................................................................................122
B.12.1.7. atomicDec().................................................................................123
B.12.1.8. atomicCAS().................................................................................123
B.12.2. Bitwise Functions............................................................................... 123
B.12.2.1. atomicAnd().................................................................................123
B.12.2.2. atomicOr().................................................................................. 123
B.12.2.3. atomicXor()................................................................................. 124
B.13. Warp Vote Functions................................................................................. 124
B.14. Warp Match Functions............................................................................... 125
B.14.1. Synopsis.......................................................... 125
B.14.2. Description....................................................................................... 125
B.15. Warp Shuffle Functions..............................................................................126
B.15.1. Synopsis........................................................................................... 126
B.15.2. Description....................................................................................... 126
B.15.3. Return Value..................................................................................... 127
B.15.4. Notes.............................................................................................. 128
B.15.5. Examples..........................................................................................128
B.15.5.1. Broadcast of a single value across a warp............................................ 128
B.15.5.2. Inclusive plus-scan across sub-partitions of 8 threads............................... 129
B.15.5.3. Reduction across a warp................................................................. 129
B.16. Warp matrix functions [PREVIEW FEATURE]......................................................129
B.16.1. Description....................................................................................... 130
B.16.2. Example...........................................................................................132
B.17. Profiler Counter Function........................................................................... 132
B.18. Assertion............................................................................................... 133
B.19. Formatted Output.................................................................................... 134
B.19.1. Format Specifiers............................................................................... 134
B.19.2. Limitations....................................................................................... 135
B.19.3. Associated Host-Side API.......................................................................136
B.19.4. Examples..........................................................................................136
B.20. Dynamic Global Memory Allocation and Operations............................................ 137
B.20.1. Heap Memory Allocation....................................................................... 138
B.20.2. Interoperability with Host Memory API......................................................138
B.20.3. Examples..........................................................................................138
B.20.3.1. Per Thread Allocation.....................................................................139
B.20.3.2. Per Thread Block Allocation............................................................. 140
B.20.3.3. Allocation Persisting Between Kernel Launches...................................... 141
B.21. Execution Configuration............................................................................. 142
B.22. Launch Bounds........................................................................................ 142
B.23. #pragma unroll........................................................................................145
B.24. SIMD Video Instructions..............................................................................145
Appendix C. Cooperative Groups.......................................................................... 147
C.1. Introduction.............................................................................................147
C.2. Intra-block Groups..................................................................................... 148
C.2.1. Thread Groups and Thread Blocks.............................................................148
C.2.2. Tiled Partitions....................................................................................149
C.2.3. Thread Block Tiles............................................................................... 149
C.2.4. Coalesced Groups................................................................................ 150
C.2.5. Uses of Intra-block Cooperative Groups...................................................... 150
C.2.5.1. Discovery Pattern........................................................................... 150
C.2.5.2. Warp-Synchronous Code Pattern..........................................................151
C.2.5.3. Composition.................................................................................. 152
C.3. Grid Synchronization.................................................................................. 152
C.4. Multi-Device Synchronization........................................................................ 154
Appendix D. CUDA Dynamic Parallelism.................................................................. 156
D.1. Introduction.............................................................................................156
D.1.1. Overview........................................................................................... 156
D.1.2. Glossary............................................................................................ 156
D.2. Execution Environment and Memory Model....................................................... 157
D.2.1. Execution Environment.......................................................................... 157
D.2.1.1. Parent and Child Grids..................................................................... 157
D.2.1.2. Scope of CUDA Primitives................................................................. 158
D.2.1.3. Synchronization..............................................................................158
D.2.1.4. Streams and Events.........................................................................158
D.2.1.5. Ordering and Concurrency.................................................................159
D.2.1.6. Device Management........................................................................ 159
D.2.2. Memory Model.................................................................................... 159
D.2.2.1. Coherence and Consistency............................................................... 160
D.3. Programming Interface................................................................................162
D.3.1. CUDA C/C++ Reference..........................................................................162
D.3.1.1. Device-Side Kernel Launch................................................................ 162
D.3.1.2. Streams....................................................................................... 163
D.3.1.3. Events......................................................................................... 164
D.3.1.4. Synchronization..............................................................................164
D.3.1.5. Device Management........................................................................ 164
D.3.1.6. Memory Declarations....................................................................... 165
D.3.1.7. API Errors and Launch Failures........................................................... 166
D.3.1.8. API Reference................................................................................167
D.3.2. Device-side Launch from PTX.................................................................. 168
D.3.2.1. Kernel Launch APIs......................................................................... 168
D.3.2.2. Parameter Buffer Layout.................................................................. 170
D.3.3. Toolkit Support for Dynamic Parallelism......................................................170
D.3.3.1. Including Device Runtime API in CUDA Code........................................... 170
D.3.3.2. Compiling and Linking......................................................................171
D.4. Programming Guidelines.............................................................................. 171
D.4.1. Basics............................................................................................... 171
D.4.2. Performance....................................................................................... 172
D.4.2.1. Synchronization..............................................................................172
D.4.2.2. Dynamic-parallelism-enabled Kernel Overhead........................................ 172
D.4.3. Implementation Restrictions and Limitations................................................ 173
D.4.3.1. Runtime....................................................................................... 173
Appendix E. Mathematical Functions..................................................................... 176
E.1. Standard Functions.................................................................................... 176
E.2. Intrinsic Functions..................................................................................... 184
Appendix F. C/C++ Language Support.................................................................... 187
F.1. C++11 Language Features............................................................................. 187
F.2. C++14 Language Features............................................................................. 190
F.3. Restrictions.............................................................................................. 190
F.3.1. Host Compiler Extensions........................................................................190
F.3.2. Preprocessor Symbols.............................................................................191
F.3.2.1. __CUDA_ARCH__............................................................................. 191
F.3.3. Qualifiers........................................................................................... 192
F.3.3.1. Device Memory Space Specifiers.......................................................... 192
F.3.3.2. __managed__ Memory Space Specifier...................................................193
F.3.3.3. Volatile Qualifier.............................................................................195
F.3.4. Pointers............................................................................................. 196
F.3.5. Operators........................................................................................... 196
F.3.5.1. Assignment Operator........................................................................ 196
F.3.5.2. Address Operator............................................................................ 196
F.3.6. Run Time Type Information (RTTI)............................................................. 196
F.3.7. Exception Handling............................................................................... 196
F.3.8. Standard Library...................................................................................196
F.3.9. Functions........................................................................................... 197
F.3.9.1. External Linkage............................................................................. 197
F.3.9.2. Compiler generated functions............................................................. 197
F.3.9.3. Function Parameters........................................................................ 197
F.3.9.4. Static Variables within Function.......................................................... 198
F.3.9.5. Function Pointers............................................................................ 198
F.3.9.6. Function Recursion.......................................................................... 199
F.3.9.7. Friend Functions............................................................................. 199
F.3.9.8. Operator Function........................................................................... 199
F.3.10. Classes............................................................................................. 199
F.3.10.1. Data Members...............................................................................199
F.3.10.2. Function Members..........................................................................199
F.3.10.3. Virtual Functions........................................................................... 199
F.3.10.4. Virtual Base Classes........................................................................199
F.3.10.5. Anonymous Unions......................................................................... 200
F.3.10.6. Windows-Specific........................................................................... 200
F.3.11. Templates......................................................................................... 200
F.3.12. Trigraphs and Digraphs..........................................................................201
F.3.13. Const-qualified variables....................................................................... 201
F.3.14. Deprecation Annotation........................................................................ 202
F.3.15. C++11 Features...................................................................................202
F.3.15.1. Lambda Expressions........................................................................203
F.3.15.2. std::initializer_list..........................................................................204
F.3.15.3. Rvalue references.......................................................................... 204
F.3.15.4. Constexpr functions and function templates.......................................... 204
F.3.15.5. Constexpr variables........................................................................ 205
F.3.15.6. Inline namespaces..........................................................................205
F.3.15.7. thread_local................................................................................. 207
F.3.15.8. __global__ functions and function templates......................................... 207
F.3.15.9. __device__/__constant__/__shared__ variables...................................... 209
F.3.15.10. Defaulted functions.......................................................................209
F.3.16. C++14 Features...................................................................................209
F.3.16.1. Functions with deduced return type.................................................... 209
F.3.16.2. Variable templates......................................................................... 210
F.3.16.3. [[deprecated]] attribute.................................................................. 211
F.4. Polymorphic Function Wrappers..................................................................... 211
F.5. Experimental Feature: Extended Lambdas.........................................................214
F.5.1. Extended Lambda Type Traits...................................................................216
F.5.2. Extended Lambda Restrictions.................................................................. 217
F.5.3. Notes on __host__ __device__ lambdas.......................................................225
F.5.4. *this Capture By Value........................................................................... 226
F.5.5. Additional Notes...................................................................................228
F.6. Code Samples........................................................................................... 230
F.6.1. Data Aggregation Class...........................................................................230
F.6.2. Derived Class...................................................................................... 230
F.6.3. Class Template.....................................................................................231
F.6.4. Function Template................................................................................ 231
F.6.5. Functor Class...................................................................................... 232
Appendix G. Texture Fetching..............................................................................233
G.1. Nearest-Point Sampling............................................................................... 233
G.2. Linear Filtering........................................................................................ 234
G.3. Table Lookup........................................................................................... 235
Appendix H. Compute Capabilities........................................................................ 237
H.1. Features and Technical Specifications............................................................. 237
H.2. Floating-Point Standard...............................................................................241
H.3. Compute Capability 3.x.............................................................................. 242
H.3.1. Architecture....................................................................................... 242
H.3.2. Global Memory....................................................................................243
H.3.3. Shared Memory................................................................................... 245
H.4. Compute Capability 5.x.............................................................................. 246
H.4.1. Architecture....................................................................................... 246
H.4.2. Global Memory....................................................................................247
H.4.3. Shared Memory................................................................................... 247
H.5. Compute Capability 6.x.............................................................................. 251
H.5.1. Architecture....................................................................................... 251
H.5.2. Global Memory....................................................................................251
H.5.3. Shared Memory................................................................................... 251
H.6. Compute Capability 7.x.............................................................................. 252
H.6.1. Architecture....................................................................................... 252
H.6.2. Independent Thread Scheduling............................................................... 252
H.6.3. Global Memory....................................................................................254
H.6.4. Shared Memory................................................................................... 255
Appendix I. Driver API....................................................................................... 256
I.1. Context................................................................................................... 259
I.2. Module.................................................................................................... 260
I.3. Kernel Execution........................................................................................261
I.4. Interoperability between Runtime and Driver APIs............................................... 263
Appendix J. CUDA Environment Variables............................................................... 264
Appendix K. Unified Memory Programming..............................................................267
K.1. Unified Memory Introduction........................................................................ 267
K.1.1. System Requirements............................................................................ 268
K.1.2. Simplifying GPU Programming.................................................................. 268
K.1.3. Data Migration and Coherency................................................................. 270
K.1.4. GPU Memory Oversubscription................................................................. 270
K.1.5. Multi-GPU Support................................................................................271
K.2. Programming Model....................................................................................271
K.2.1. Managed Memory Opt In........................................................................ 271
K.2.1.1. Explicit Allocation Using cudaMallocManaged()........................................ 272
K.2.1.2. Global-Scope Managed Variables Using __managed__.................................273
K.2.2. Coherency and Concurrency.................................................................... 273
K.2.2.1. GPU Exclusive Access To Managed Memory............................................. 273
K.2.2.2. Explicit Synchronization and Logical GPU Activity.....................................274
K.2.2.3. Managing Data Visibility and Concurrent CPU + GPU Access with Streams......... 275
K.2.2.4. Stream Association Examples............................................................. 276
K.2.2.5. Stream Attach With Multithreaded Host Programs.................................... 277
K.2.2.6. Advanced Topic: Modular Programs and Data Access Constraints................... 278
K.2.2.7. Memcpy()/Memset() Behavior With Managed Memory................................ 279
K.2.3. Language Integration............................................................................ 279
K.2.3.1. Host Program Errors with __managed__ Variables.....................................280
K.2.4. Querying Unified Memory Support.............................................................281
K.2.4.1. Device Properties........................................................................... 281
K.2.4.2. Pointer Attributes........................................................................... 281
K.2.5. Advanced Topics.................................................................................. 281
K.2.5.1. Managed Memory with Multi-GPU Programs on pre-6.x Architectures.............. 281
K.2.5.2. Using fork() with Managed Memory...................................................... 282
K.3. Performance Tuning................................................................................... 282
K.3.1. Data Prefetching..................................................................................283
K.3.2. Data Usage Hints................................................................................. 284
K.3.3. Querying Usage Attributes...................................................................... 285
LIST OF FIGURES
Figure 1 Floating-Point Operations per Second for the CPU and GPU ...................................1
Figure 2 Memory Bandwidth for the CPU and GPU .........................................................2
Figure 3 The GPU Devotes More Transistors to Data Processing ......................................... 2
Figure 4 GPU Computing Applications ........................................................................ 4
Figure 5 Automatic Scalability ................................................................................. 6
Figure 6 Grid of Thread Blocks ...............................................................................10
Figure 7 Memory Hierarchy ................................................................................... 12
Figure 8 Heterogeneous Programming ...................................................................... 14
Figure 9 Matrix Multiplication without Shared Memory .................................................. 26
Figure 10 Matrix Multiplication with Shared Memory .....................................................29
Figure 11 The Driver API Is Backward but Not Forward Compatible ................................... 67
Figure 12 Parent-Child Launch Nesting .................................................................... 158
Figure 13 Nearest-Point Sampling Filtering Mode ........................................................234
Figure 14 Linear Filtering Mode ............................................................................ 235
Figure 15 One-Dimensional Table Lookup Using Linear Filtering ...................................... 236
Figure 16 Examples of Global Memory Accesses ......................................................... 245
Figure 17 Strided Shared Memory Accesses ...............................................................249
Figure 18 Irregular Shared Memory Accesses ............................................................. 250
Figure 19 Library Context Management ................................................................... 260
LIST OF TABLES
Table 1 Cubemap Fetch ........................................................................................51
Table 2 Throughput of Native Arithmetic Instructions ................................................... 85
Table 3 Alignment Requirements .............................................................................97
Table 4 New Device-only Launch Implementation Functions .......................................... 167
Table 5 Supported API Functions ........................................................................... 167
Table 6 Single-Precision Mathematical Standard Library Functions with Maximum ULP Error .... 176
Table 7 Double-Precision Mathematical Standard Library Functions with Maximum ULP Error... 180
Table 8 Functions Affected by -use_fast_math .......................................................... 184
Table 9 Single-Precision Floating-Point Intrinsic Functions ............................................. 185
Table 10 Double-Precision Floating-Point Intrinsic Functions .......................................... 186
Table 11 C++11 Language Features ........................................................................ 187
Table 12 C++14 Language Features ........................................................................ 190
Table 13 Feature Support per Compute Capability ......................................................237
Table 14 Technical Specifications per Compute Capability ............................................ 238
Table 15 Objects Available in the CUDA Driver API ..................................................... 256
Table 16 CUDA Environment Variables ..................................................................... 264
Chapter 1.
INTRODUCTION
1.1. From Graphics Processing to General Purpose
Parallel Computing
Driven by the insatiable market demand for real-time, high-definition 3D graphics,
the programmable Graphics Processing Unit (GPU) has evolved into a highly parallel,
multithreaded, manycore processor with tremendous computational horsepower and
very high memory bandwidth, as illustrated by Figure 1 and Figure 2.
Figure 1 Floating-Point Operations per Second for the CPU and GPU
Figure 2 Memory Bandwidth for the CPU and GPU
The reason behind the discrepancy in floating-point capability between the CPU and the
GPU is that the GPU is specialized for compute-intensive, highly parallel computation
- exactly what graphics rendering is about - and therefore designed such that more
transistors are devoted to data processing rather than data caching and flow control, as
schematically illustrated by Figure 3.
[Figure: schematic die layouts comparing a CPU (Control, Cache, a few ALUs, DRAM) with a GPU (many ALUs, DRAM)]
Figure 3 The GPU Devotes More Transistors to Data Processing
More specifically, the GPU is especially well-suited to address problems that can be
expressed as data-parallel computations - the same program is executed on many data
elements in parallel - with high arithmetic intensity - the ratio of arithmetic operations
to memory operations. Because the same program is executed for each data element,
there is a lower requirement for sophisticated flow control, and because it is executed on
many data elements and has high arithmetic intensity, the memory access latency can be
hidden with calculations instead of big data caches.
Data-parallel processing maps data elements to parallel processing threads. Many
applications that process large data sets can use a data-parallel programming model
to speed up the computations. In 3D rendering, large sets of pixels and vertices are
mapped to parallel threads. Similarly, image and media processing applications such as
post-processing of rendered images, video encoding and decoding, image scaling, stereo
vision, and pattern recognition can map image blocks and pixels to parallel processing
threads. In fact, many algorithms outside the field of image rendering and processing
are accelerated by data-parallel processing, from general signal processing or physics
simulation to computational finance or computational biology.
1.2. CUDA®: A General-Purpose Parallel Computing
Platform and Programming Model
In November 2006, NVIDIA introduced CUDA®, a general purpose parallel computing
platform and programming model that leverages the parallel compute engine in
NVIDIA GPUs to solve many complex computational problems in a more efficient way
than on a CPU.
CUDA comes with a software environment that allows developers to use C as a high-level programming language. As illustrated by Figure 4, other languages, application programming interfaces, or directives-based approaches are supported, such as FORTRAN, DirectCompute, and OpenACC.
Figure 4 GPU Computing Applications
CUDA is designed to support various languages and application programming interfaces.
1.3. A Scalable Programming Model
The advent of multicore CPUs and manycore GPUs means that mainstream processor
chips are now parallel systems. Furthermore, their parallelism continues to scale
with Moore's law. The challenge is to develop application software that transparently
scales its parallelism to leverage the increasing number of processor cores, much as
3D graphics applications transparently scale their parallelism to manycore GPUs with
widely varying numbers of cores.
The CUDA parallel programming model is designed to overcome this challenge while
maintaining a low learning curve for programmers familiar with standard programming
languages such as C.
At its core are three key abstractions - a hierarchy of thread groups, shared memories,
and barrier synchronization - that are simply exposed to the programmer as a minimal
set of language extensions.
These abstractions provide fine-grained data parallelism and thread parallelism,
nested within coarse-grained data parallelism and task parallelism. They guide the
programmer to partition the problem into coarse sub-problems that can be solved
independently in parallel by blocks of threads, and each sub-problem into finer pieces
that can be solved cooperatively in parallel by all threads within the block.
This decomposition preserves language expressivity by allowing threads to cooperate
when solving each sub-problem, and at the same time enables automatic scalability.
Indeed, each block of threads can be scheduled on any of the available multiprocessors
within a GPU, in any order, concurrently or sequentially, so that a compiled CUDA
program can execute on any number of multiprocessors as illustrated by Figure 5, and
only the runtime system needs to know the physical multiprocessor count.
This scalable programming model allows the GPU architecture to span a wide market
range by simply scaling the number of multiprocessors and memory partitions: from
the high-performance enthusiast GeForce GPUs and professional Quadro and Tesla
computing products to a variety of inexpensive, mainstream GeForce GPUs (see CUDA-Enabled GPUs for a list of all CUDA-enabled GPUs).
[Figure: a multithreaded CUDA program partitioned into Blocks 0-7; a GPU with 2 SMs (SM 0, SM 1) runs the blocks four deep per SM, while a GPU with 4 SMs (SM 0-SM 3) runs them two deep per SM]
A GPU is built around an array of Streaming Multiprocessors (SMs) (see Hardware
Implementation for more details). A multithreaded program is partitioned into blocks
of threads that execute independently from each other, so that a GPU with more
multiprocessors will automatically execute the program in less time than a GPU with
fewer multiprocessors.
Figure 5 Automatic Scalability
1.4. Document Structure
This document is organized into the following chapters:
‣ Chapter Introduction is a general introduction to CUDA.
‣ Chapter Programming Model outlines the CUDA programming model.
‣ Chapter Programming Interface describes the programming interface.
‣ Chapter Hardware Implementation describes the hardware implementation.
‣ Chapter Performance Guidelines gives some guidance on how to achieve maximum performance.
‣ Appendix CUDA-Enabled GPUs lists all CUDA-enabled devices.
‣ Appendix C Language Extensions is a detailed description of all extensions to the C language.
‣ Appendix Cooperative Groups describes synchronization primitives for various groups of CUDA threads.
‣ Appendix CUDA Dynamic Parallelism describes how to launch and synchronize one kernel from another.
‣ Appendix Mathematical Functions lists the mathematical functions supported in CUDA.
‣ Appendix C/C++ Language Support lists the C++ features supported in device code.
‣ Appendix Texture Fetching gives more details on texture fetching.
‣ Appendix Compute Capabilities gives the technical specifications of various devices, as well as more architectural details.
‣ Appendix Driver API introduces the low-level driver API.
‣ Appendix CUDA Environment Variables lists all the CUDA environment variables.
‣ Appendix Unified Memory Programming introduces the Unified Memory programming model.
Chapter 2.
PROGRAMMING MODEL
This chapter introduces the main concepts behind the CUDA programming model by
outlining how they are exposed in C. An extensive description of CUDA C is given in
Programming Interface.
Full code for the vector addition example used in this chapter and the next can be found
in the vectorAdd CUDA sample.
2.1. Kernels
CUDA C extends C by allowing the programmer to define C functions, called kernels,
that, when called, are executed N times in parallel by N different CUDA threads, as
opposed to only once like regular C functions.
A kernel is defined using the __global__ declaration specifier and the number of
CUDA threads that execute that kernel for a given kernel call is specified using a new
<<<...>>> execution configuration syntax (see C Language Extensions). Each thread
that executes the kernel is given a unique thread ID that is accessible within the kernel
through the built-in threadIdx variable.
As an illustration, the following sample code adds two vectors A and B of size N and
stores the result into vector C:
// Kernel definition
__global__ void VecAdd(float* A, float* B, float* C)
{
    int i = threadIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    ...
    // Kernel invocation with N threads
    VecAdd<<<1, N>>>(A, B, C);
    ...
}
Here, each of the N threads that execute VecAdd() performs one pair-wise addition.
2.2. Thread Hierarchy
For convenience, threadIdx is a 3-component vector, so that threads can be identified
using a one-dimensional, two-dimensional, or three-dimensional thread index, forming
a one-dimensional, two-dimensional, or three-dimensional block of threads, called a
thread block. This provides a natural way to invoke computation across the elements in a
domain such as a vector, matrix, or volume.
The index of a thread and its thread ID relate to each other in a straightforward way: For a one-dimensional block, they are the same; for a two-dimensional block of size (Dx, Dy), the thread ID of a thread of index (x, y) is (x + y Dx); for a three-dimensional block of size (Dx, Dy, Dz), the thread ID of a thread of index (x, y, z) is (x + y Dx + z Dx Dy).
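As a quick illustrative sketch (not one of the guide's samples), these formulas translate directly into device code using the built-in threadIdx and blockDim variables:
// Sketch: the linear thread ID of a thread within a three-dimensional
// block, matching the formula (x + y Dx + z Dx Dy) above.
__global__ void ThreadIdExample()
{
    int tid = threadIdx.x
            + threadIdx.y * blockDim.x
            + threadIdx.z * blockDim.x * blockDim.y;
    // tid is unique within the block and ranges over [0, blockDim.x * blockDim.y * blockDim.z).
}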
As an example, the following code adds two matrices A and B of size NxN and stores the
result into matrix C:
// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
                       float C[N][N])
{
    int i = threadIdx.x;
    int j = threadIdx.y;
    C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation with one block of N * N * 1 threads
    int numBlocks = 1;
    dim3 threadsPerBlock(N, N);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    ...
}
There is a limit to the number of threads per block, since all threads of a block are
expected to reside on the same processor core and must share the limited memory
resources of that core. On current GPUs, a thread block may contain up to 1024 threads.
However, a kernel can be executed by multiple equally-shaped thread blocks, so that the
total number of threads is equal to the number of threads per block times the number of
blocks.
Blocks are organized into a one-dimensional, two-dimensional, or three-dimensional
grid of thread blocks as illustrated by Figure 6. The number of thread blocks in a grid is
usually dictated by the size of the data being processed or the number of processors in
the system, which it can greatly exceed.
[Figure 6 graphic: a grid of 3x2 thread blocks, with Block (1, 1) expanded to show its 4x3 threads.]
Figure 6 Grid of Thread Blocks
The number of threads per block and the number of blocks per grid specified in the
<<<...>>> syntax can be of type int or dim3. Two-dimensional blocks or grids can be
specified as in the example above.
Each block within the grid can be identified by a one-dimensional, two-dimensional,
or three-dimensional index accessible within the kernel through the built-in blockIdx
variable. The dimension of the thread block is accessible within the kernel through the
built-in blockDim variable.
Extending the previous MatAdd() example to handle multiple blocks, the code becomes
as follows.
// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
                       float C[N][N])
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < N && j < N)
        C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    ...
}
A thread block size of 16x16 (256 threads), although arbitrary in this case, is a common
choice. The grid is created with enough blocks to have one thread per matrix element
as before. For simplicity, this example assumes that the number of threads per grid in
each dimension is evenly divisible by the number of threads per block in that dimension,
although that need not be the case.
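When N is not a multiple of the block dimensions, a common pattern (shown here as a sketch rather than as part of the sample above) is to round the number of blocks up and rely on the bounds check inside the kernel:
// Sketch: launch at least one thread per matrix element even when N is not
// a multiple of 16; threads outside the matrix are filtered out by the
// kernel's if (i < N && j < N) check.
dim3 threadsPerBlock(16, 16);
dim3 numBlocks((N + threadsPerBlock.x - 1) / threadsPerBlock.x,
               (N + threadsPerBlock.y - 1) / threadsPerBlock.y);
MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);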
Thread blocks are required to execute independently: It must be possible to execute
them in any order, in parallel or in series. This independence requirement allows thread
blocks to be scheduled in any order across any number of cores as illustrated by Figure
5, enabling programmers to write code that scales with the number of cores.
Threads within a block can cooperate by sharing data through some shared memory and
by synchronizing their execution to coordinate memory accesses. More precisely, one
can specify synchronization points in the kernel by calling the __syncthreads()
intrinsic function; __syncthreads() acts as a barrier at which all threads in the
block must wait before any is allowed to proceed. Shared Memory gives an example of
using shared memory. In addition to __syncthreads(), the Cooperative Groups API
provides a rich set of thread-synchronization primitives.
For efficient cooperation, the shared memory is expected to be a low-latency memory
near each processor core (much like an L1 cache) and __syncthreads() is expected to
be lightweight.
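As a minimal sketch of this cooperation (not one of the guide's samples; the kernel name and launch shape are illustrative), the following kernel stages data in shared memory and synchronizes before threads read elements written by other threads:
// Sketch: reverse n elements within a single block.
// Assumes a one-dimensional block of exactly n threads, with n <= 1024.
__global__ void ReverseInBlock(float* data, int n)
{
    __shared__ float tile[1024];
    int i = threadIdx.x;
    tile[i] = data[i];            // each thread stages one element
    __syncthreads();              // wait until the whole tile is written
    data[i] = tile[n - 1 - i];    // read an element staged by another thread
}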
2.3. Memory Hierarchy
CUDA threads may access data from multiple memory spaces during their execution
as illustrated by Figure 7. Each thread has private local memory. Each thread block has
shared memory visible to all threads of the block and with the same lifetime as the block.
All threads have access to the same global memory.
There are also two additional read-only memory spaces accessible by all threads: the
constant and texture memory spaces. The global, constant, and texture memory spaces
are optimized for different memory usages (see Device Memory Accesses). Texture
memory also offers different addressing modes, as well as data filtering, for some
specific data formats (see Texture and Surface Memory).
The global, constant, and texture memory spaces are persistent across kernel launches
by the same application.
[Figure 7 graphic: per-thread local memory, per-block shared memory, and global memory shared by all thread blocks of Grid 0 and Grid 1.]
Figure 7 Memory Hierarchy
2.4. Heterogeneous Programming
As illustrated by Figure 8, the CUDA programming model assumes that the CUDA
threads execute on a physically separate device that operates as a coprocessor to the host
running the C program. This is the case, for example, when the kernels execute on a
GPU and the rest of the C program executes on a CPU.
The CUDA programming model also assumes that both the host and the device
maintain their own separate memory spaces in DRAM, referred to as host memory and
device memory, respectively. Therefore, a program manages the global, constant, and
texture memory spaces visible to kernels through calls to the CUDA runtime (described
in Programming Interface). This includes device memory allocation and deallocation as
well as data transfer between host and device memory.
Unified Memory provides managed memory to bridge the host and device memory
spaces. Managed memory is accessible from all CPUs and GPUs in the system as a
single, coherent memory image with a common address space. This capability enables
oversubscription of device memory and can greatly simplify the task of porting
applications by eliminating the need to explicitly mirror data on host and device. See
Unified Memory Programming for an introduction to Unified Memory.
[Figure 8 graphic: a C program alternating between serial code executing on the host and parallel kernels (Kernel0, Kernel1) launched on the device as grids of thread blocks.]
Serial code executes on the host while parallel code executes on the device.
Figure 8 Heterogeneous Programming
2.5. Compute Capability
The compute capability of a device is represented by a version number, also sometimes
called its "SM version". This version number identifies the features supported by the
GPU hardware and is used by applications at runtime to determine which hardware
features and/or instructions are available on the present GPU.
The compute capability comprises a major revision number X and a minor revision
number Y and is denoted by X.Y.
Devices with the same major revision number are of the same core architecture. The
major revision number is 7 for devices based on the Volta architecture, 6 for devices
based on the Pascal architecture, 5 for devices based on the Maxwell architecture, 3 for
devices based on the Kepler architecture, 2 for devices based on the Fermi architecture,
and 1 for devices based on the Tesla architecture.
The minor revision number corresponds to an incremental improvement to the core
architecture, possibly including new features.
CUDA-Enabled GPUs lists all CUDA-enabled devices along with their compute capability. Compute Capabilities gives the technical specifications of each compute capability.
The compute capability version of a particular GPU should not be confused with the
CUDA version (e.g., CUDA 7.5, CUDA 8, CUDA 9), which is the version of the CUDA
software platform. The CUDA platform is used by application developers to create
applications that run on many generations of GPU architectures, including future
GPU architectures yet to be invented. While new versions of the CUDA platform often
add native support for a new GPU architecture by supporting the compute capability
version of that architecture, new versions of the CUDA platform typically also include
software features that are independent of hardware generation.
The Tesla and Fermi architectures are no longer supported starting with CUDA 7.0 and
CUDA 9.0, respectively.
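For reference, an application can query the compute capability of a device at runtime through the runtime API (a minimal sketch; error checking omitted):
// Sketch: read the major and minor compute capability of device 0.
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
printf("Device 0 has compute capability %d.%d\n", prop.major, prop.minor);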
Chapter 3.
PROGRAMMING INTERFACE
CUDA C provides a simple path for users familiar with the C programming language to
easily write programs for execution by the device.
It consists of a minimal set of extensions to the C language and a runtime library.
The core language extensions have been introduced in Programming Model. They allow
programmers to define a kernel as a C function and use some new syntax to specify the
grid and block dimension each time the function is called. A complete description of all
extensions can be found in C Language Extensions. Any source file that contains some of
these extensions must be compiled with nvcc as outlined in Compilation with NVCC.
The runtime is introduced in Compilation Workflow. It provides C functions that
execute on the host to allocate and deallocate device memory, transfer data between host
memory and device memory, manage systems with multiple devices, etc. A complete
description of the runtime can be found in the CUDA reference manual.
The runtime is built on top of a lower-level C API, the CUDA driver API, which is
also accessible by the application. The driver API provides an additional level of
control by exposing lower-level concepts such as CUDA contexts - the analogue of host
processes for the device - and CUDA modules - the analogue of dynamically loaded
libraries for the device. Most applications do not use the driver API as they do not
need this additional level of control and when using the runtime, context and module
management are implicit, resulting in more concise code. The driver API is introduced
in Driver API and fully described in the reference manual.
3.1. Compilation with NVCC
Kernels can be written using the CUDA instruction set architecture, called PTX, which
is described in the PTX reference manual. It is however usually more effective to use a
high-level programming language such as C. In both cases, kernels must be compiled
into binary code by nvcc to execute on the device.
nvcc is a compiler driver that simplifies the process of compiling C or PTX code: It
provides simple and familiar command line options and executes them by invoking the
collection of tools that implement the different compilation stages. This section gives
an overview of nvcc workflow and command options. A complete description can be
found in the nvcc user manual.
3.1.1. Compilation Workflow
3.1.1.1. Offline Compilation
Source files compiled with nvcc can include a mix of host code (i.e., code that executes
on the host) and device code (i.e., code that executes on the device). nvcc's basic
workflow consists in separating device code from host code and then:
‣ compiling the device code into an assembly form (PTX code) and/or binary form (cubin object),
‣ and modifying the host code by replacing the <<<...>>> syntax introduced in Kernels (and described in more detail in Execution Configuration) by the necessary CUDA C runtime function calls to load and launch each compiled kernel from the PTX code and/or cubin object.
The modified host code is output either as C code that is left to be compiled using
another tool or as object code directly by letting nvcc invoke the host compiler during
the last compilation stage.
Applications can then:
‣ Either link to the compiled host code (this is the most common case),
‣ Or ignore the modified host code (if any) and use the CUDA driver API (see Driver API) to load and execute the PTX code or cubin object.
3.1.1.2. Just-in-Time Compilation
Any PTX code loaded by an application at runtime is compiled further to binary code
by the device driver. This is called just-in-time compilation. Just-in-time compilation
increases application load time, but allows the application to benefit from any new
compiler improvements coming with each new device driver. It is also the only way
for applications to run on devices that did not exist at the time the application was
compiled, as detailed in Application Compatibility.
When the device driver just-in-time compiles some PTX code for some application, it
automatically caches a copy of the generated binary code in order to avoid repeating
the compilation in subsequent invocations of the application. The cache - referred to as
compute cache - is automatically invalidated when the device driver is upgraded, so that
applications can benefit from the improvements in the new just-in-time compiler built
into the device driver.
Environment variables are available to control just-in-time compilation as described in
CUDA Environment Variables.
3.1.2. Binary Compatibility
Binary code is architecture-specific. A cubin object is generated using the compiler
option -code that specifies the targeted architecture: For example, compiling with
-code=sm_35 produces binary code for devices of compute capability 3.5. Binary
compatibility is guaranteed from one minor revision to the next one, but not from one
minor revision to the previous one or across major revisions. In other words, a cubin
object generated for compute capability X.y will only execute on devices of compute
capability X.z where z≥y.
3.1.3. PTX Compatibility
Some PTX instructions are only supported on devices of higher compute capabilities.
For example, Warp Shuffle Functions are only supported on devices of compute
capability 3.0 and above. The -arch compiler option specifies the compute capability
that is assumed when compiling C to PTX code. So, code that contains warp shuffle, for
example, must be compiled with -arch=compute_30 (or higher).
PTX code produced for some specific compute capability can always be compiled to
binary code of greater or equal compute capability. Note that a binary compiled from an
earlier PTX version may not make use of some hardware features. For example, a binary
targeting devices of compute capability 7.0 (Volta) compiled from PTX generated for
compute capability 6.0 (Pascal) will not make use of Tensor Core instructions, since these
were not available on Pascal. As a result, the final binary may perform worse than would
be possible if the binary were generated using the latest version of PTX.
3.1.4. Application Compatibility
To execute code on devices of specific compute capability, an application must load
binary or PTX code that is compatible with this compute capability as described in
Binary Compatibility and PTX Compatibility. In particular, to be able to execute code
on future architectures with higher compute capability (for which no binary code can be
generated yet), an application must load PTX code that will be just-in-time compiled for
these devices (see Just-in-Time Compilation).
Which PTX and binary code gets embedded in a CUDA C application is controlled by
the -arch and -code compiler options or the -gencode compiler option as detailed in
the nvcc user manual. For example,
nvcc x.cu
-gencode arch=compute_35,code=sm_35
-gencode arch=compute_50,code=sm_50
-gencode arch=compute_60,code=\'compute_60,sm_60\'
embeds binary code compatible with compute capability 3.5 and 5.0 (first and second
-gencode options) and PTX and binary code compatible with compute capability 6.0
(third -gencode option).
Host code is generated to automatically select at runtime the most appropriate code to
load and execute, which, in the above example, will be:
‣ 3.5 binary code for devices with compute capability 3.5 and 3.7,
‣ 5.0 binary code for devices with compute capability 5.0 and 5.2,
‣ 6.0 binary code for devices with compute capability 6.0 and 6.1,
‣ PTX code which is compiled to binary code at runtime for devices with compute capability 7.0 and higher.
x.cu can have an optimized code path that uses warp shuffle operations, for example, which are only supported in devices of compute capability 3.0 and higher. The __CUDA_ARCH__ macro can be used to differentiate various code paths based on compute capability. It is only defined for device code. When compiling with -arch=compute_35 for example, __CUDA_ARCH__ is equal to 350.
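The following sketch (the function name and fallback path are illustrative, not taken from the guide) shows how __CUDA_ARCH__ can select between code paths:
// Sketch: use warp shuffle on compute capability 3.0 and higher,
// fall back to another strategy on older devices.
__device__ int SumAcrossWarp(int value)
{
#if __CUDA_ARCH__ >= 300
    for (int offset = 16; offset > 0; offset /= 2)
        value += __shfl_down_sync(0xffffffff, value, offset);
#else
    // A shared-memory reduction would be used here on older devices.
#endif
    return value;
}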
Applications using the driver API must compile code to separate files and explicitly load
and execute the most appropriate file at runtime.
The Volta architecture introduces Independent Thread Scheduling which changes the way threads are scheduled on the GPU. For code relying on specific behavior of SIMT scheduling in previous architectures, Independent Thread Scheduling may alter the set of participating threads, leading to incorrect results. To aid migration while implementing the corrective actions detailed in Independent Thread Scheduling, Volta developers can opt-in to Pascal's thread scheduling with the compiler option combination -arch=compute_60 -code=sm_70.
The nvcc user manual lists various shorthand for the -arch, -code, and -gencode compiler options. For example, -arch=sm_35 is a shorthand for -arch=compute_35 -code=compute_35,sm_35 (which is the same as -gencode arch=compute_35,code=\'compute_35,sm_35\').
3.1.5. C/C++ Compatibility
The front end of the compiler processes CUDA source files according to C++ syntax
rules. Full C++ is supported for the host code. However, only a subset of C++ is fully
supported for the device code as described in C/C++ Language Support.
3.1.6. 64-Bit Compatibility
The 64-bit version of nvcc compiles device code in 64-bit mode (i.e., pointers are 64-bit).
Device code compiled in 64-bit mode is only supported with host code compiled in 64-bit mode.
Similarly, the 32-bit version of nvcc compiles device code in 32-bit mode and device
code compiled in 32-bit mode is only supported with host code compiled in 32-bit mode.
The 32-bit version of nvcc can compile device code in 64-bit mode also using the -m64
compiler option.
The 64-bit version of nvcc can compile device code in 32-bit mode also using the -m32
compiler option.
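For example (illustrative command lines, not taken from the nvcc manual),
nvcc -m64 x.cu
nvcc -m32 x.cu
compile the code in x.cu in 64-bit and 32-bit mode, respectively, provided the host toolchain supports that mode.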
3.2. CUDA C Runtime
The runtime is implemented in the cudart library, which is linked to the application,
either statically via cudart.lib or libcudart.a, or dynamically via cudart.dll or
libcudart.so. Applications that require cudart.dll and/or cudart.so for dynamic
linking typically include them as part of the application installation package.
All its entry points are prefixed with cuda.
As mentioned in Heterogeneous Programming, the CUDA programming model
assumes a system composed of a host and a device, each with their own separate
memory. Device Memory gives an overview of the runtime functions used to manage
device memory.
Shared Memory illustrates the use of shared memory, introduced in Thread Hierarchy,
to maximize performance.
Page-Locked Host Memory introduces page-locked host memory that is required to
overlap kernel execution with data transfers between host and device memory.
Asynchronous Concurrent Execution describes the concepts and API used to enable
asynchronous concurrent execution at various levels in the system.
Multi-Device System shows how the programming model extends to a system with
multiple devices attached to the same host.
Error Checking describes how to properly check the errors generated by the runtime.
Call Stack mentions the runtime functions used to manage the CUDA C call stack.
Texture and Surface Memory presents the texture and surface memory spaces that
provide another way to access device memory; they also expose a subset of the GPU
texturing hardware.
Graphics Interoperability introduces the various functions the runtime provides to
interoperate with the two main graphics APIs, OpenGL and Direct3D.
3.2.1. Initialization
There is no explicit initialization function for the runtime; it initializes the first time a
runtime function is called (more specifically any function other than functions from the
device and version management sections of the reference manual). One needs to keep
this in mind when timing runtime function calls and when interpreting the error code
from the first call into the runtime.
During initialization, the runtime creates a CUDA context for each device in the system
(see Context for more details on CUDA contexts). This context is the primary context for
this device and it is shared among all the host threads of the application. As part of this
context creation, the device code is just-in-time compiled if necessary (see Just-in-Time
Compilation) and loaded into device memory. This all happens under the hood and the
runtime does not expose the primary context to the application.
When a host thread calls cudaDeviceReset(), this destroys the primary context of the
device the host thread currently operates on (i.e., the current device as defined in Device
Selection). The next runtime function call made by any host thread that has this device
as current will create a new primary context for this device.
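Because initialization is lazy, a common pattern (a sketch, not prescribed by the guide) is to trigger it explicitly before any timed measurements, for example:
// Sketch: force creation of the primary context up front so that the
// first timed runtime call does not also pay the initialization cost.
cudaSetDevice(0);  // select the device to be initialized (illustrative choice)
cudaFree(0);       // a harmless runtime call that triggers initialization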
3.2.2. Device Memory
As mentioned in Heterogeneous Programming, the CUDA programming model
assumes a system composed of a host and a device, each with their own separate
memory. Kernels operate out of device memory, so the runtime provides functions to
allocate, deallocate, and copy device memory, as well as transfer data between host
memory and device memory.
Device memory can be allocated either as linear memory or as CUDA arrays.
CUDA arrays are opaque memory layouts optimized for texture fetching. They are
described in Texture and Surface Memory.
Linear memory exists on the device in a 40-bit address space, so separately allocated
entities can reference one another via pointers, for example, in a binary tree.
Linear memory is typically allocated using cudaMalloc() and freed using cudaFree()
and data transfer between host memory and device memory are typically done using
cudaMemcpy(). In the vector addition code sample of Kernels, the vectors need to be
copied from host memory to device memory:
// Device code
__global__ void VecAdd(float* A, float* B, float* C, int N)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < N)
        C[i] = A[i] + B[i];
}

// Host code
int main()
{
    int N = ...;
    size_t size = N * sizeof(float);

    // Allocate input vectors h_A and h_B in host memory
    float* h_A = (float*)malloc(size);
    float* h_B = (float*)malloc(size);

    // Initialize input vectors
    ...

    // Allocate vectors in device memory
    float* d_A;
    cudaMalloc(&d_A, size);
    float* d_B;
    cudaMalloc(&d_B, size);
    float* d_C;
    cudaMalloc(&d_C, size);

    // Copy vectors from host memory to device memory
    cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);

    // Invoke kernel
    int threadsPerBlock = 256;
    int blocksPerGrid =
            (N + threadsPerBlock - 1) / threadsPerBlock;
    VecAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, N);

    // Copy result from device memory to host memory
    // h_C contains the result in host memory
    cudaMemcpy(h_C, d_C, size, cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A);
    cudaFree(d_B);
    cudaFree(d_C);

    // Free host memory
    ...
}
Linear memory can also be allocated through cudaMallocPitch() and cudaMalloc3D(). These functions are recommended for allocations of 2D or 3D arrays as they make sure that the allocation is appropriately padded to meet the alignment requirements described in Device Memory Accesses, therefore ensuring best performance when accessing the row addresses or performing copies between 2D arrays and other regions of device memory (using the cudaMemcpy2D() and cudaMemcpy3D() functions). The returned pitch (or stride) must be used to access array elements. The
following code sample allocates a width x height 2D array of floating-point values and
shows how to loop over the array elements in device code:
// Host code
int width = 64, height = 64;
float* devPtr;
size_t pitch;
cudaMallocPitch(&devPtr, &pitch,
                width * sizeof(float), height);
MyKernel<<<100, 512>>>(devPtr, pitch, width, height);

// Device code
__global__ void MyKernel(float* devPtr,
                         size_t pitch, int width, int height)
{
    for (int r = 0; r < height; ++r) {
        float* row = (float*)((char*)devPtr + r * pitch);
        for (int c = 0; c < width; ++c) {
            float element = row[c];
        }
    }
}
The following code sample allocates a width x height x depth 3D array of floating-point values and shows how to loop over the array elements in device code:
// Host code
int width = 64, height = 64, depth = 64;
cudaExtent extent = make_cudaExtent(width * sizeof(float),
                                    height, depth);
cudaPitchedPtr devPitchedPtr;
cudaMalloc3D(&devPitchedPtr, extent);
MyKernel<<<100, 512>>>(devPitchedPtr, width, height, depth);

// Device code
__global__ void MyKernel(cudaPitchedPtr devPitchedPtr,
                         int width, int height, int depth)
{
    char* devPtr = (char*)devPitchedPtr.ptr;
    size_t pitch = devPitchedPtr.pitch;
    size_t slicePitch = pitch * height;
    for (int z = 0; z < depth; ++z) {
        char* slice = devPtr + z * slicePitch;
        for (int y = 0; y < height; ++y) {
            float* row = (float*)(slice + y * pitch);
            for (int x = 0; x < width; ++x) {
                float element = row[x];
            }
        }
    }
}
The reference manual lists all the various functions used to copy memory between
linear memory allocated with cudaMalloc(), linear memory allocated with
cudaMallocPitch() or cudaMalloc3D(), CUDA arrays, and memory allocated for
variables declared in global or constant memory space.
The following code sample illustrates various ways of accessing global variables via the
runtime API:
__constant__ float constData[256];
float data[256];
cudaMemcpyToSymbol(constData, data, sizeof(data));
cudaMemcpyFromSymbol(data, constData, sizeof(data));
__device__ float devData;
float value = 3.14f;
cudaMemcpyToSymbol(devData, &value, sizeof(float));
__device__ float* devPointer;
float* ptr;
cudaMalloc(&ptr, 256 * sizeof(float));
cudaMemcpyToSymbol(devPointer, &ptr, sizeof(ptr));
cudaGetSymbolAddress() is used to retrieve the address pointing to the memory
allocated for a variable declared in global memory space. The size of the allocated
memory is obtained through cudaGetSymbolSize().
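As a brief sketch (reusing the devData variable declared above), the two functions can be used as follows:
// Sketch: retrieve the device address and size of devData.
float* symbolPtr;
size_t symbolSize;
cudaGetSymbolAddress((void**)&symbolPtr, devData);
cudaGetSymbolSize(&symbolSize, devData);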
3.2.3. Shared Memory
As detailed in Variable Memory Space Specifiers shared memory is allocated using the
__shared__ memory space specifier.
Shared memory is expected to be much faster than global memory as mentioned in
Thread Hierarchy and detailed in Shared Memory. Any opportunity to replace global
memory accesses by shared memory accesses should therefore be exploited as illustrated
by the following matrix multiplication example.
The following code sample is a straightforward implementation of matrix multiplication
that does not take advantage of shared memory. Each thread reads one row of A and one
column of B and computes the corresponding element of C as illustrated in Figure 9. A is
therefore read B.width times from global memory and B is read A.height times.
// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.width + col)
typedef struct {
    int width;
    int height;
    float* elements;
} Matrix;

// Thread block size
#define BLOCK_SIZE 16

// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
    // Load A and B to device memory
    Matrix d_A;
    d_A.width = A.width; d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cudaMalloc(&d_A.elements, size);
    cudaMemcpy(d_A.elements, A.elements, size,
               cudaMemcpyHostToDevice);
    Matrix d_B;
    d_B.width = B.width; d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cudaMalloc(&d_B.elements, size);
    cudaMemcpy(d_B.elements, B.elements, size,
               cudaMemcpyHostToDevice);

    // Allocate C in device memory
    Matrix d_C;
    d_C.width = C.width; d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cudaMalloc(&d_C.elements, size);

    // Invoke kernel
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
    MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);

    // Read C from device memory
    cudaMemcpy(C.elements, d_C.elements, size,
               cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);
}

// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
    // Each thread computes one element of C
    // by accumulating results into Cvalue
    float Cvalue = 0;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    for (int e = 0; e < A.width; ++e)
        Cvalue += A.elements[row * A.width + e]
                * B.elements[e * B.width + col];
    C.elements[row * C.width + col] = Cvalue;
}
[Figure 9 graphic: each thread reads one row of A and one column of B to compute one element of C.]
Figure 9 Matrix Multiplication without Shared Memory
The following code sample is an implementation of matrix multiplication that does take
advantage of shared memory. In this implementation, each thread block is responsible
for computing one square sub-matrix Csub of C and each thread within the block is
responsible for computing one element of Csub. As illustrated in Figure 10, Csub is equal
to the product of two rectangular matrices: the sub-matrix of A of dimension (A.width,
block_size) that has the same row indices as Csub, and the sub-matrix of B of dimension
(block_size, A.width )that has the same column indices as Csub. In order to fit into the
device's resources, these two rectangular matrices are divided into as many square
matrices of dimension block_size as necessary and Csub is computed as the sum of the
products of these square matrices. Each of these products is performed by first loading
the two corresponding square matrices from global memory to shared memory with one
thread loading one element of each matrix, and then by having each thread compute one
element of the product. Each thread accumulates the result of each of these products into
a register and once done writes the result to global memory.
By blocking the computation this way, we take advantage of fast shared memory and
save a lot of global memory bandwidth since A is only read (B.width / block_size) times
from global memory and B is read (A.height / block_size) times.
The Matrix type from the previous code sample is augmented with a stride field, so that
sub-matrices can be efficiently represented with the same type. __device__ functions are
used to get and set elements and build any sub-matrix from a matrix.
// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
    int width;
    int height;
    int stride;
    float* elements;
} Matrix;

// Thread block size
#define BLOCK_SIZE 16

// Get a matrix element
__device__ float GetElement(const Matrix A, int row, int col)
{
    return A.elements[row * A.stride + col];
}

// Set a matrix element
__device__ void SetElement(Matrix A, int row, int col,
                           float value)
{
    A.elements[row * A.stride + col] = value;
}

// Get the BLOCK_SIZExBLOCK_SIZE sub-matrix Asub of A that is
// located col sub-matrices to the right and row sub-matrices down
// from the upper-left corner of A
__device__ Matrix GetSubMatrix(Matrix A, int row, int col)
{
    Matrix Asub;
    Asub.width    = BLOCK_SIZE;
    Asub.height   = BLOCK_SIZE;
    Asub.stride   = A.stride;
    Asub.elements = &A.elements[A.stride * BLOCK_SIZE * row
                                + BLOCK_SIZE * col];
    return Asub;
}

// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
    // Load A and B to device memory
    Matrix d_A;
    d_A.width = d_A.stride = A.width; d_A.height = A.height;
    size_t size = A.width * A.height * sizeof(float);
    cudaMalloc(&d_A.elements, size);
    cudaMemcpy(d_A.elements, A.elements, size,
               cudaMemcpyHostToDevice);
    Matrix d_B;
    d_B.width = d_B.stride = B.width; d_B.height = B.height;
    size = B.width * B.height * sizeof(float);
    cudaMalloc(&d_B.elements, size);
    cudaMemcpy(d_B.elements, B.elements, size,
               cudaMemcpyHostToDevice);

    // Allocate C in device memory
    Matrix d_C;
    d_C.width = d_C.stride = C.width; d_C.height = C.height;
    size = C.width * C.height * sizeof(float);
    cudaMalloc(&d_C.elements, size);

    // Invoke kernel
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
    MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);

    // Read C from device memory
    cudaMemcpy(C.elements, d_C.elements, size,
               cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);
}

// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
    // Block row and column
    int blockRow = blockIdx.y;
    int blockCol = blockIdx.x;

    // Each thread block computes one sub-matrix Csub of C
    Matrix Csub = GetSubMatrix(C, blockRow, blockCol);

    // Each thread computes one element of Csub
    // by accumulating results into Cvalue
    float Cvalue = 0;

    // Thread row and column within Csub
    int row = threadIdx.y;
    int col = threadIdx.x;

    // Loop over all the sub-matrices of A and B that are
    // required to compute Csub
    // Multiply each pair of sub-matrices together
    // and accumulate the results
    for (int m = 0; m < (A.width / BLOCK_SIZE); ++m) {

        // Get sub-matrix Asub of A
        Matrix Asub = GetSubMatrix(A, blockRow, m);

        // Get sub-matrix Bsub of B
        Matrix Bsub = GetSubMatrix(B, m, blockCol);

        // Shared memory used to store Asub and Bsub respectively
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

        // Load Asub and Bsub from device memory to shared memory
        // Each thread loads one element of each sub-matrix
        As[row][col] = GetElement(Asub, row, col);
        Bs[row][col] = GetElement(Bsub, row, col);

        // Synchronize to make sure the sub-matrices are loaded
        // before starting the computation
        __syncthreads();
        // Multiply Asub and Bsub together
        for (int e = 0; e < BLOCK_SIZE; ++e)
            Cvalue += As[row][e] * Bs[e][col];

        // Synchronize to make sure that the preceding
        // computation is done before loading two new
        // sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write Csub to device memory
    // Each thread writes one element
    SetElement(Csub, row, col, Cvalue);
}
[Figure 10 graphic: each thread block computes one BLOCK_SIZE x BLOCK_SIZE sub-matrix Csub of C from the corresponding band of A and band of B, processed one square sub-matrix at a time.]
Figure 10 Matrix Multiplication with Shared Memory
3.2.4. Page-Locked Host Memory
The runtime provides functions to allow the use of page-locked (also known as pinned)
host memory (as opposed to regular pageable host memory allocated by malloc()):
‣ cudaHostAlloc() and cudaFreeHost() allocate and free page-locked host memory;
‣ cudaHostRegister() page-locks a range of memory allocated by malloc() (see reference manual for limitations).
Using page-locked host memory has several benefits:
‣ Copies between page-locked host memory and device memory can be performed concurrently with kernel execution for some devices as mentioned in Asynchronous Concurrent Execution.
‣ On some devices, page-locked host memory can be mapped into the address space of the device, eliminating the need to copy it to or from device memory as detailed in Mapped Memory.
‣ On systems with a front-side bus, bandwidth between host memory and device memory is higher if host memory is allocated as page-locked and even higher if in addition it is allocated as write-combining as described in Write-Combining Memory.
Page-locked host memory is a scarce resource however, so allocations in page-locked
memory will start failing long before allocations in pageable memory. In addition, by
reducing the amount of physical memory available to the operating system for paging,
consuming too much page-locked memory reduces overall system performance.
The simple zero-copy CUDA sample comes with a detailed document on the page-locked memory APIs.
3.2.4.1. Portable Memory
A block of page-locked memory can be used in conjunction with any device in the
system (see Multi-Device System for more details on multi-device systems), but by
default, the benefits of using page-locked memory described above are only available in
conjunction with the device that was current when the block was allocated (and with all
devices sharing the same unified address space, if any, as described in Unified Virtual
Address Space). To make these advantages available to all devices, the block needs to be
allocated by passing the flag cudaHostAllocPortable to cudaHostAlloc() or page-locked by passing the flag cudaHostRegisterPortable to cudaHostRegister().
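A minimal sketch (the buffer name and size are illustrative) of allocating a portable page-locked buffer:
// Sketch: allocate page-locked memory usable with any device in the system.
float* hostBuffer;
cudaHostAlloc((void**)&hostBuffer, 1024 * sizeof(float),
              cudaHostAllocPortable);
// ... use the buffer with any device ...
cudaFreeHost(hostBuffer);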
3.2.4.2. Write-Combining Memory
By default page-locked host memory is allocated as cacheable. It can optionally be allocated as write-combining instead by passing flag cudaHostAllocWriteCombined to cudaHostAlloc(). Write-combining memory frees up the host's L1 and L2 cache resources, making more cache available to the rest of the application. In addition, write-combining memory is not snooped during transfers across the PCI Express bus, which can improve transfer performance by up to 40%.
Reading from write-combining memory from the host is prohibitively slow, so write-combining memory should in general be used for memory that the host only writes to.
3.2.4.3. Mapped Memory
A block of page-locked host memory can also be mapped into the address space
of the device by passing flag cudaHostAllocMapped to cudaHostAlloc() or by
passing flag cudaHostRegisterMapped to cudaHostRegister(). Such a block
has therefore in general two addresses: one in host memory that is returned by
cudaHostAlloc() or malloc(), and one in device memory that can be retrieved
using cudaHostGetDevicePointer() and then used to access the block from within a
kernel. The only exception is for pointers allocated with cudaHostAlloc() and when a
unified address space is used for the host and the device as mentioned in Unified Virtual
Address Space.
Accessing host memory directly from within a kernel has several advantages:
‣ There is no need to allocate a block in device memory and copy data between this block and the block in host memory; data transfers are implicitly performed as needed by the kernel;
‣ There is no need to use streams (see Concurrent Data Transfers) to overlap data transfers with kernel execution; the kernel-originated data transfers automatically overlap with kernel execution.
Since mapped page-locked memory is shared between host and device however, the application must synchronize memory accesses using streams or events (see Asynchronous Concurrent Execution) to avoid any potential read-after-write, write-after-read, or write-after-write hazards.
To be able to retrieve the device pointer to any mapped page-locked memory, page-locked memory mapping must be enabled by calling cudaSetDeviceFlags() with the cudaDeviceMapHost flag before any other CUDA call is performed. Otherwise, cudaHostGetDevicePointer() will return an error.
cudaHostGetDevicePointer() also returns an error if the device does not support
mapped page-locked host memory. Applications may query this capability by checking
the canMapHostMemory device property (see Device Enumeration), which is equal to 1
for devices that support mapped page-locked host memory.
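Putting these pieces together, a minimal sketch (MyKernel, size, and the launch configuration are illustrative; error checking omitted) looks like this:
// Sketch: map page-locked host memory into the device address space.
cudaSetDeviceFlags(cudaDeviceMapHost);   // must precede other CUDA calls
float* h_data;
cudaHostAlloc((void**)&h_data, size, cudaHostAllocMapped);
float* d_data;
cudaHostGetDevicePointer((void**)&d_data, h_data, 0);
MyKernel<<<blocksPerGrid, threadsPerBlock>>>(d_data);  // kernel accesses host memory directly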
Note that atomic functions (see Atomic Functions) operating on mapped page-locked
memory are not atomic from the point of view of the host or other devices.
Also note that CUDA runtime requires that 1-byte, 2-byte, 4-byte, and 8-byte naturally
aligned loads and stores to host memory initiated from the device are preserved as
single accesses from the point of view of the host and other devices. On some platforms,
atomics to memory may be broken by the hardware into separate load and store
operations. These component load and store operations have the same requirements on
preservation of naturally aligned accesses. As an example, the CUDA runtime does not
support a PCI Express bus topology where a PCI Express bridge splits 8-byte naturally
aligned writes into two 4-byte writes between the device and the host.
3.2.5. Asynchronous Concurrent Execution
CUDA exposes the following operations as independent tasks that can operate
concurrently with one another:
‣ Computation on the host;
‣ Computation on the device;
‣ Memory transfers from the host to the device;
‣ Memory transfers from the device to the host;
‣ Memory transfers within the memory of a given device;
‣ Memory transfers among devices.
The level of concurrency achieved between these operations will depend on the feature
set and compute capability of the device as described below.
3.2.5.1. Concurrent Execution between Host and Device
Concurrent host execution is facilitated through asynchronous library functions that
return control to the host thread before the device completes the requested task. Using
asynchronous calls, many device operations can be queued up together to be executed
by the CUDA driver when appropriate device resources are available. This relieves the
host thread of much of the responsibility to manage the device, leaving it free for other
tasks. The following device operations are asynchronous with respect to the host:
‣ Kernel launches;
‣ Memory copies within a single device's memory;
‣ Memory copies from host to device of a memory block of 64 KB or less;
‣ Memory copies performed by functions that are suffixed with Async;
‣ Memory set function calls.
Programmers can globally disable asynchronicity of kernel launches for all CUDA
applications running on a system by setting the CUDA_LAUNCH_BLOCKING environment
variable to 1. This feature is provided for debugging purposes only and should not be
used as a way to make production software run reliably.
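For example, a debugging session might launch an application (hypothetical name) from the shell as:
CUDA_LAUNCH_BLOCKING=1 ./myApplication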
Kernel launches are synchronous if hardware counters are collected via a profiler
(Nsight, Visual Profiler) unless concurrent kernel profiling is enabled. Async memory
copies will also be synchronous if they involve host memory that is not page-locked.
3.2.5.2. Concurrent Kernel Execution
Some devices of compute capability 2.x and higher can execute multiple
kernels concurrently. Applications may query this capability by checking the
concurrentKernels device property (see Device Enumeration), which is equal to 1 for
devices that support it.
The maximum number of kernel launches that a device can execute concurrently
depends on its compute capability and is listed in Table 14.
A kernel from one CUDA context cannot execute concurrently with a kernel from
another CUDA context.
Kernels that use many textures or a large amount of local memory are less likely to
execute concurrently with other kernels.
3.2.5.3. Overlap of Data Transfer and Kernel Execution
Some devices can perform an asynchronous memory copy to or from the GPU
concurrently with kernel execution. Applications may query this capability by checking
the asyncEngineCount device property (see Device Enumeration), which is greater
than zero for devices that support it. If host memory is involved in the copy, it must be
page-locked.
It is also possible to perform an intra-device copy simultaneously with kernel execution (on devices that support the concurrentKernels device property) and/or with copies to or from the device (for devices that support the asyncEngineCount property). Intra-device copies are initiated using the standard memory copy functions with destination and source addresses residing on the same device.
3.2.5.4. Concurrent Data Transfers
Some devices of compute capability 2.x and higher can overlap copies to and from the
device. Applications may query this capability by checking the asyncEngineCount
device property (see Device Enumeration), which is equal to 2 for devices that support
it. In order to be overlapped, any host memory involved in the transfers must be page-locked.
3.2.5.5. Streams
Applications manage the concurrent operations described above through streams. A
stream is a sequence of commands (possibly issued by different host threads) that
execute in order. Different streams, on the other hand, may execute their commands out
of order with respect to one another or concurrently; this behavior is not guaranteed and
should therefore not be relied upon for correctness (e.g., inter-kernel communication is
undefined).
3.2.5.5.1. Creation and Destruction
A stream is defined by creating a stream object and specifying it as the stream parameter
to a sequence of kernel launches and host <-> device memory copies. The following
code sample creates two streams and allocates an array hostPtr of float in page-locked memory.
cudaStream_t stream[2];
for (int i = 0; i < 2; ++i)
    cudaStreamCreate(&stream[i]);
float* hostPtr;
cudaMallocHost(&hostPtr, 2 * size);
Each of these streams is defined by the following code sample as a sequence of one
memory copy from host to device, one kernel launch, and one memory copy from device
to host:
for (int i = 0; i < 2; ++i) {
    cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
    MyKernel<<<100, 512, 0, stream[i]>>>
            (outputDevPtr + i * size, inputDevPtr + i * size, size);
    cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);
}
Each stream copies its portion of input array hostPtr to array inputDevPtr in device
memory, processes inputDevPtr on the device by calling MyKernel(), and copies
the result outputDevPtr back to the same portion of hostPtr. Overlapping Behavior
describes how the streams overlap in this example depending on the capability of the
device. Note that hostPtr must point to page-locked host memory for any overlap to
occur.
Streams are released by calling cudaStreamDestroy().
for (int i = 0; i < 2; ++i)
    cudaStreamDestroy(stream[i]);
In case the device is still doing work in the stream when cudaStreamDestroy() is
called, the function will return immediately and the resources associated with the stream
will be released automatically once the device has completed all work in the stream.
3.2.5.5.2. Default Stream
Kernel launches and host <-> device memory copies that do not specify any stream
parameter, or equivalently that set the stream parameter to zero, are issued to the default
stream. They are therefore executed in order.
For code that is compiled using the --default-stream per-thread compilation flag
(or that defines the CUDA_API_PER_THREAD_DEFAULT_STREAM macro before including
CUDA headers (cuda.h and cuda_runtime.h)), the default stream is a regular stream
and each host thread has its own default stream.
For code that is compiled using the --default-stream legacy compilation flag, the
default stream is a special stream called the NULL stream and each device has a single
NULL stream used for all host threads. The NULL stream is special as it causes implicit
synchronization as described in Implicit Synchronization.
For code that is compiled without specifying a --default-stream compilation flag, --default-stream legacy is assumed as the default.
3.2.5.5.3. Explicit Synchronization
There are various ways to explicitly synchronize streams with each other.
cudaDeviceSynchronize() waits until all preceding commands in all streams of all
host threads have completed.
cudaStreamSynchronize() takes a stream as a parameter and waits until all preceding commands in the given stream have completed. It can be used to synchronize the host with a specific stream, allowing other streams to continue executing on the device.
cudaStreamWaitEvent() takes a stream and an event as parameters (see Events for a description of events) and makes all the commands added to the given stream after the call to cudaStreamWaitEvent() delay their execution until the given event has completed. The stream can be 0, in which case all the commands added to any stream after the call to cudaStreamWaitEvent() wait on the event.
cudaStreamQuery() provides applications with a way to know if all preceding commands in a stream have completed.
To avoid unnecessary slowdowns, all these synchronization functions are usually best
used for timing purposes or to isolate a launch or memory copy that is failing.
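As a brief sketch (reusing the stream array, buffers, and MyKernel from the samples above), cudaStreamWaitEvent() can order work across streams without blocking the host:
// Sketch: make stream[1] wait for a copy issued to stream[0] before
// launching a kernel that consumes the copied data.
cudaEvent_t event;
cudaEventCreate(&event);
cudaMemcpyAsync(inputDevPtr, hostPtr, size, cudaMemcpyHostToDevice, stream[0]);
cudaEventRecord(event, stream[0]);        // completes when the copy is done
cudaStreamWaitEvent(stream[1], event, 0); // stream[1] waits for the event
MyKernel<<<100, 512, 0, stream[1]>>>(outputDevPtr, inputDevPtr, size);
cudaEventDestroy(event);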
3.2.5.5.4. Implicit Synchronization
Two commands from different streams cannot run concurrently if any one of the
following operations is issued in-between them by the host thread:
‣ a page-locked host memory allocation,
‣ a device memory allocation,
‣ a device memory set,
‣ a memory copy between two addresses to the same device memory,
‣ any CUDA command to the NULL stream,
‣ a switch between the L1/shared memory configurations described in Compute Capability 3.x and Compute Capability 7.x.
For devices that support concurrent kernel execution and are of compute capability 3.0
or lower, any operation that requires a dependency check to see if a streamed kernel
launch is complete:
‣ Can start executing only when all thread blocks of all prior kernel launches from any stream in the CUDA context have started executing;
‣ Blocks all later kernel launches from any stream in the CUDA context until the kernel launch being checked is complete.
Operations that require a dependency check include any other commands within the
same stream as the launch being checked and any call to cudaStreamQuery() on that
stream. Therefore, applications should follow these guidelines to improve their potential
for concurrent kernel execution:
‣ All independent operations should be issued before dependent operations,
‣ Synchronization of any kind should be delayed as long as possible.
3.2.5.5.5. Overlapping Behavior
The amount of execution overlap between two streams depends on the order in which
the commands are issued to each stream and whether or not the device supports
overlap of data transfer and kernel execution (see Overlap of Data Transfer and Kernel
Execution), concurrent kernel execution (see Concurrent Kernel Execution), and/or
concurrent data transfers (see Concurrent Data Transfers).
For example, on devices that do not support concurrent data transfers, the two streams
of the code sample of Creation and Destruction do not overlap at all because the
memory copy from host to device is issued to stream[1] after the memory copy from
device to host is issued to stream[0], so it can only start once the memory copy from
device to host issued to stream[0] has completed. If the code is rewritten the following
way (and assuming the device supports overlap of data transfer and kernel execution)
for (int i = 0; i < 2; ++i)
    cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
for (int i = 0; i < 2; ++i)
    MyKernel<<<100, 512, 0, stream[i]>>>
            (outputDevPtr + i * size, inputDevPtr + i * size, size);
for (int i = 0; i < 2; ++i)
    cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);
then the memory copy from host to device issued to stream[1] overlaps with the kernel
launch issued to stream[0].
On devices that do support concurrent data transfers, the two streams of the code
sample of Creation and Destruction do overlap: The memory copy from host to device
issued to stream[1] overlaps with the memory copy from device to host issued to
stream[0] and even with the kernel launch issued to stream[0] (assuming the device
supports overlap of data transfer and kernel execution). However, for devices of
compute capability 3.0 or lower, the kernel executions cannot possibly overlap because
the second kernel launch is issued to stream[1] after the memory copy from device
to host is issued to stream[0], so it is blocked until the first kernel launch issued to
stream[0] is complete as per Implicit Synchronization. If the code is rewritten as
above, the kernel executions overlap (assuming the device supports concurrent kernel
execution) since the second kernel launch is issued to stream[1] before the memory copy
from device to host is issued to stream[0]. In that case however, the memory copy from
device to host issued to stream[0] only overlaps with the last thread blocks of the kernel
launch issued to stream[1] as per Implicit Synchronization, which can represent only a
small portion of the total execution time of the kernel.
3.2.5.5.6. Callbacks
The runtime provides a way to insert a callback at any point into a stream via
cudaStreamAddCallback(). A callback is a function that is executed on the host once
all commands issued to the stream before the callback have completed. Callbacks in
stream 0 are executed once all preceding tasks and commands issued in all streams
before the callback have completed.
The following code sample adds the callback function MyCallback to each of two
streams after issuing a host-to-device memory copy, a kernel launch and a device-to-host
memory copy into each stream. The callback will begin execution on the host after each
of the device-to-host memory copies completes.
void CUDART_CB MyCallback(cudaStream_t stream, cudaError_t status, void *data){
    printf("Inside callback %d\n", (size_t)data);
}
...
for (size_t i = 0; i < 2; ++i) {
    cudaMemcpyAsync(devPtrIn[i], hostPtr[i], size, cudaMemcpyHostToDevice,
                    stream[i]);
    MyKernel<<<100, 512, 0, stream[i]>>>(devPtrOut[i], devPtrIn[i], size);
    cudaMemcpyAsync(hostPtr[i], devPtrOut[i], size, cudaMemcpyDeviceToHost,
                    stream[i]);
    cudaStreamAddCallback(stream[i], MyCallback, (void*)i, 0);
}
The commands that are issued in a stream (or all commands issued to any stream if the
callback is issued to stream 0) after a callback do not start executing before the callback
has completed. The last parameter of cudaStreamAddCallback() is reserved for future
use.
A callback must not make CUDA API calls (directly or indirectly), as it might end up
waiting on itself, leading to a deadlock.
3.2.5.5.7. Stream Priorities
The relative priorities of streams can be specified at creation using
cudaStreamCreateWithPriority(). The range of allowable priorities,
ordered as [ highest priority, lowest priority ] can be obtained using the
cudaDeviceGetStreamPriorityRange() function. At runtime, as blocks in low-priority
streams finish, waiting blocks in higher-priority streams are scheduled in their place.
The following code sample obtains the allowable range of priorities for the current
device, and creates streams with the highest and lowest available priorities
// get the range of stream priorities for this device
int priority_high, priority_low;
cudaDeviceGetStreamPriorityRange(&priority_low, &priority_high);
// create streams with highest and lowest available priorities
cudaStream_t st_high, st_low;
cudaStreamCreateWithPriority(&st_high, cudaStreamNonBlocking, priority_high);
cudaStreamCreateWithPriority(&st_low, cudaStreamNonBlocking, priority_low);
3.2.5.6. Events
The runtime also provides a way to closely monitor the device's progress, as well as
perform accurate timing, by letting the application asynchronously record events at
any point in the program and query when these events are completed. An event has
completed when all tasks - or optionally, all commands in a given stream - preceding the
event have completed. Events in stream zero are completed after all preceding tasks and
commands in all streams are completed.
3.2.5.6.1. Creation and Destruction
The following code sample creates two events:
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
They are destroyed this way:
cudaEventDestroy(start);
cudaEventDestroy(stop);
3.2.5.6.2. Elapsed Time
The events created in Creation and Destruction can be used to time the code sample of
Creation and Destruction the following way:
cudaEventRecord(start, 0);
for (int i = 0; i < 2; ++i) {
    cudaMemcpyAsync(inputDev + i * size, inputHost + i * size,
                    size, cudaMemcpyHostToDevice, stream[i]);
    MyKernel<<<100, 512, 0, stream[i]>>>
        (outputDev + i * size, inputDev + i * size, size);
    cudaMemcpyAsync(outputHost + i * size, outputDev + i * size,
                    size, cudaMemcpyDeviceToHost, stream[i]);
}
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float elapsedTime;
cudaEventElapsedTime(&elapsedTime, start, stop);
3.2.5.7. Synchronous Calls
When a synchronous function is called, control is not returned to the host thread before
the device has completed the requested task. Whether the host thread will then yield,
block, or spin can be specified by calling cudaSetDeviceFlags() with some specific
flags (see reference manual for details) before any other CUDA call is performed by the
host thread.
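For example, a minimal sketch using one of the standard scheduling flags
(cudaDeviceScheduleBlockingSync; devPtr, hostPtr, and size are assumed to be set up
elsewhere):
// Must be called before any other CUDA call is performed by this host thread.
cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
...
// The host thread now blocks (rather than spins) while it waits for the device
// to complete this synchronous copy.
cudaMemcpy(devPtr, hostPtr, size, cudaMemcpyHostToDevice);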
3.2.6. Multi-Device System
3.2.6.1. Device Enumeration
A host system can have multiple devices. The following code sample shows how to
enumerate these devices, query their properties, and determine the number of
CUDA-enabled devices.
int deviceCount;
cudaGetDeviceCount(&deviceCount);
int device;
for (device = 0; device < deviceCount; ++device) {
    cudaDeviceProp deviceProp;
    cudaGetDeviceProperties(&deviceProp, device);
    printf("Device %d has compute capability %d.%d.\n",
           device, deviceProp.major, deviceProp.minor);
}
3.2.6.2. Device Selection
A host thread can set the device it operates on at any time by calling cudaSetDevice().
Device memory allocations and kernel launches are made on the currently set device;
streams and events are created in association with the currently set device. If no call to
cudaSetDevice() is made, the current device is device 0.
The following code sample illustrates how setting the current device affects memory
allocation and kernel execution.
size_t size = 1024 * sizeof(float);
cudaSetDevice(0);            // Set device 0 as current
float* p0;
cudaMalloc(&p0, size);       // Allocate memory on device 0
MyKernel<<<1000, 128>>>(p0); // Launch kernel on device 0
cudaSetDevice(1);            // Set device 1 as current
float* p1;
cudaMalloc(&p1, size);       // Allocate memory on device 1
MyKernel<<<1000, 128>>>(p1); // Launch kernel on device 1
3.2.6.3. Stream and Event Behavior
A kernel launch will fail if it is issued to a stream that is not associated to the current
device as illustrated in the following code sample.
cudaSetDevice(0);               // Set device 0 as current
cudaStream_t s0;
cudaStreamCreate(&s0);          // Create stream s0 on device 0
MyKernel<<<100, 64, 0, s0>>>(); // Launch kernel on device 0 in s0
cudaSetDevice(1);               // Set device 1 as current
cudaStream_t s1;
cudaStreamCreate(&s1);          // Create stream s1 on device 1
MyKernel<<<100, 64, 0, s1>>>(); // Launch kernel on device 1 in s1

// This kernel launch will fail:
MyKernel<<<100, 64, 0, s0>>>(); // Launch kernel on device 1 in s0
A memory copy will succeed even if it is issued to a stream that is not associated to the
current device.
cudaEventRecord() will fail if the input event and input stream are associated to
different devices.
cudaEventElapsedTime() will fail if the two input events are associated to different
devices.
cudaEventSynchronize() and cudaEventQuery() will succeed even if the input
event is associated to a device that is different from the current device.
cudaStreamWaitEvent() will succeed even if the input stream and input event are
associated to different devices. cudaStreamWaitEvent() can therefore be used to
synchronize multiple devices with each other.
Each device has its own default stream (see Default Stream), so commands issued to
the default stream of a device may execute out of order or concurrently with respect to
commands issued to the default stream of any other device.
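As an illustration of the rule for cudaStreamWaitEvent(), the following sketch
(reusing the streams s0 and s1 created in the sample above) makes work submitted to
device 1 wait for work submitted to device 0:
cudaSetDevice(0);               // Set device 0 as current
cudaEvent_t e0;
cudaEventCreate(&e0);           // Event associated with device 0
MyKernel<<<100, 64, 0, s0>>>(); // Launch kernel on device 0 in s0
cudaEventRecord(e0, s0);        // Record e0 in s0

cudaSetDevice(1);               // Set device 1 as current
cudaStreamWaitEvent(s1, e0, 0); // Succeeds although e0 belongs to device 0
MyKernel<<<100, 64, 0, s1>>>(); // Starts only after the device-0 kernel completes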
3.2.6.4. Peer-to-Peer Memory Access
When the application is run as a 64-bit process, devices of compute capability 2.0
and higher from the Tesla series may address each other's memory (i.e., a kernel
executing on one device can dereference a pointer to the memory of the other
device). This peer-to-peer memory access feature is supported between two devices if
cudaDeviceCanAccessPeer() returns true for these two devices.
Peer-to-peer memory access must be enabled between two devices by calling
cudaDeviceEnablePeerAccess() as illustrated in the following code sample. Each
device can support a system-wide maximum of eight peer connections.
A unified address space is used for both devices (see Unified Virtual Address Space),
so the same pointer can be used to address memory from both devices as shown in the
code sample below.
cudaSetDevice(0);                   // Set device 0 as current
float* p0;
size_t size = 1024 * sizeof(float);
cudaMalloc(&p0, size);              // Allocate memory on device 0
MyKernel<<<1000, 128>>>(p0);        // Launch kernel on device 0
cudaSetDevice(1);                   // Set device 1 as current
cudaDeviceEnablePeerAccess(0, 0);   // Enable peer-to-peer access
                                    // with device 0

// Launch kernel on device 1
// This kernel launch can access memory on device 0 at address p0
MyKernel<<<1000, 128>>>(p0);
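Peer-to-peer access is not available between every pair of devices; a minimal sketch of
the capability query mentioned above, assuming the same two-device setup:
int canAccessPeer = 0;
cudaDeviceCanAccessPeer(&canAccessPeer, 1, 0); // Can device 1 access device 0?
if (canAccessPeer) {
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);          // Flags argument must be 0
}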
3.2.6.5. Peer-to-Peer Memory Copy
Memory copies can be performed between the memories of two different devices.
When a unified address space is used for both devices (see Unified Virtual Address
Space), this is done using the regular memory copy functions mentioned in Device
Memory.
Otherwise, this is done using cudaMemcpyPeer(), cudaMemcpyPeerAsync(),
cudaMemcpy3DPeer(), or cudaMemcpy3DPeerAsync() as illustrated in the following
code sample.
cudaSetDevice(0);                   // Set device 0 as current
float* p0;
size_t size = 1024 * sizeof(float);
cudaMalloc(&p0, size);              // Allocate memory on device 0
cudaSetDevice(1);                   // Set device 1 as current
float* p1;
cudaMalloc(&p1, size);              // Allocate memory on device 1
cudaSetDevice(0);                   // Set device 0 as current
MyKernel<<<1000, 128>>>(p0);        // Launch kernel on device 0
cudaSetDevice(1);                   // Set device 1 as current
cudaMemcpyPeer(p1, 1, p0, 0, size); // Copy p0 to p1
MyKernel<<<1000, 128>>>(p1);        // Launch kernel on device 1
A copy (in the implicit NULL stream) between the memories of two different devices:
‣ does not start until all commands previously issued to either device have completed
  and
‣ runs to completion before any commands (see Asynchronous Concurrent Execution)
  issued after the copy to either device can start.
Consistent with the normal behavior of streams, an asynchronous copy between the
memories of two devices may overlap with copies or kernels in another stream.
Note that if peer-to-peer access is enabled between two devices via
cudaDeviceEnablePeerAccess() as described in Peer-to-Peer Memory Access, peer-to-peer
memory copy between these two devices no longer needs to be staged through
the host and is therefore faster.
3.2.7. Unified Virtual Address Space
When the application is run as a 64-bit process, a single address space is used for
the host and all the devices of compute capability 2.0 and higher. All host memory
allocations made via CUDA API calls and all device memory allocations on supported
devices are within this virtual address range. As a consequence:
‣ The location of any memory on the host allocated through CUDA, or on any of the
  devices which use the unified address space, can be determined from the value of
  the pointer using cudaPointerGetAttributes().
‣ When copying to or from the memory of any device which uses the unified
  address space, the cudaMemcpyKind parameter of cudaMemcpy*() can be set to
  cudaMemcpyDefault to determine locations from the pointers. This also works
  for host pointers not allocated through CUDA, as long as the current device uses
  unified addressing.
‣ Allocations via cudaHostAlloc() are automatically portable (see Portable
  Memory) across all the devices for which the unified address space is used, and
  pointers returned by cudaHostAlloc() can be used directly from within kernels
  running on these devices (i.e., there is no need to obtain a device pointer via
  cudaHostGetDevicePointer() as described in Mapped Memory).
Applications may query if the unified address space is used for a particular device by
checking that the unifiedAddressing device property (see Device Enumeration) is
equal to 1.
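A minimal sketch of that query, combined with the pointer-location query mentioned
above (ptr is an assumed host or device pointer):
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
if (prop.unifiedAddressing) {
    cudaPointerAttributes attr;
    cudaPointerGetAttributes(&attr, ptr); // Works for host and device pointers
    // attr.memoryType and attr.device identify where ptr resides
}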
3.2.8. Interprocess Communication
Any device memory pointer or event handle created by a host thread can be directly
referenced by any other thread within the same process. It is not valid outside this
process however, and therefore cannot be directly referenced by threads belonging to a
different process.
To share device memory pointers and events across processes, an application must
use the Inter Process Communication API, which is described in detail in the reference
manual. The IPC API is only supported for 64-bit processes on Linux and for devices of
compute capability 2.0 and higher.
Using this API, an application can get the IPC handle for a given device memory
pointer using cudaIpcGetMemHandle(), pass it to another process using
standard IPC mechanisms (e.g., interprocess shared memory or files), and use
cudaIpcOpenMemHandle() to retrieve a device pointer from the IPC handle that is a
valid pointer within this other process. Event handles can be shared using similar entry
points.
An example of using the IPC API is where a single master process generates a batch
of input data, making the data available to multiple slave processes without requiring
regeneration or copying.
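A minimal sketch of this exchange (the transport of the handle between processes is
left out and the variable names are illustrative only):
// In the exporting process:
float* devPtr;
cudaMalloc(&devPtr, size);
cudaIpcMemHandle_t handle;
cudaIpcGetMemHandle(&handle, devPtr);
// ... send handle to the other process via shared memory, a pipe, a file, ...

// In the importing process:
cudaIpcMemHandle_t handle;   // received from the exporting process
float* devPtr;
cudaIpcOpenMemHandle((void**)&devPtr, handle, cudaIpcMemLazyEnablePeerAccess);
// ... use devPtr in kernels ...
cudaIpcCloseMemHandle(devPtr);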
3.2.9. Error Checking
All runtime functions return an error code. For an asynchronous function (see
Asynchronous Concurrent Execution), however, this error code cannot possibly report
any of the asynchronous errors that occur on the device, since the function returns
before the device has completed the task; the error code only reports errors that occur
on the host prior to executing the task, typically related to parameter validation. If an
asynchronous error occurs, it is reported by some subsequent, unrelated runtime
function call.
The only way to check for asynchronous errors just after some asynchronous
function call is therefore to synchronize just after the call by calling
cudaDeviceSynchronize() (or by using any other synchronization mechanisms
described in Asynchronous Concurrent Execution) and checking the error code returned
by cudaDeviceSynchronize().
The runtime maintains an error variable for each host thread that is initialized to
cudaSuccess and is overwritten by the error code every time an error occurs (be it
a parameter validation error or an asynchronous error). cudaPeekAtLastError()
returns this variable. cudaGetLastError() returns this variable and resets it to
cudaSuccess.
Kernel launches do not return any error code, so cudaPeekAtLastError() or
cudaGetLastError() must be called just after the kernel launch to retrieve any
pre-launch errors. To ensure that any error returned by cudaPeekAtLastError()
or cudaGetLastError() does not originate from calls prior to the kernel launch,
one has to make sure that the runtime error variable is set to cudaSuccess just before
the kernel launch, for example, by calling cudaGetLastError() just before the
kernel launch. Kernel launches are asynchronous, so to check for asynchronous
errors, the application must synchronize in-between the kernel launch and the call to
cudaPeekAtLastError() or cudaGetLastError().
Note that cudaErrorNotReady, which may be returned by cudaStreamQuery() and
cudaEventQuery(), is not considered an error and is therefore not reported by
cudaPeekAtLastError() or cudaGetLastError().
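Putting these rules together, a typical checking sequence around a kernel launch looks
as follows (a sketch; MyKernel and its argument are assumed):
cudaGetLastError();                              // Reset the error variable to cudaSuccess
MyKernel<<<100, 512>>>(devPtr);                  // Asynchronous launch
cudaError_t launchErr = cudaGetLastError();      // Pre-launch and launch-time errors
cudaError_t asyncErr  = cudaDeviceSynchronize(); // Asynchronous (execution) errors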
3.2.10. Call Stack
On devices of compute capability 2.x and higher, the size of the call stack can be queried
using cudaDeviceGetLimit() and set using cudaDeviceSetLimit().
When the call stack overflows, the kernel call fails with a stack overflow error if the
application is run via a CUDA debugger (cuda-gdb, Nsight), or with an unspecified
launch error otherwise.
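For example (a minimal sketch; the 4 KB value is arbitrary):
size_t stackSize;
cudaDeviceGetLimit(&stackSize, cudaLimitStackSize); // Current per-thread stack size
cudaDeviceSetLimit(cudaLimitStackSize, 4 * 1024);   // Request 4 KB per thread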
3.2.11. Texture and Surface Memory
CUDA supports a subset of the texturing hardware that the GPU uses for graphics
to access texture and surface memory. Reading data from texture or surface memory
instead of global memory can have several performance benefits as described in Device
Memory Accesses.
There are two different APIs to access texture and surface memory:
‣ The texture reference API that is supported on all devices,
‣ The texture object API that is only supported on devices of compute capability 3.x.
The texture reference API has limitations that the texture object API does not have. They
are mentioned in Texture Reference API.
3.2.11.1. Texture Memory
Texture memory is read from kernels using the device functions described in Texture
Functions. The process of reading a texture by calling one of these functions is called a
texture fetch. Each texture fetch specifies a parameter called a texture object for the texture
object API or a texture reference for the texture reference API.
The texture object or the texture reference specifies:
‣ The texture, which is the piece of texture memory that is fetched. Texture objects are
  created at runtime and the texture is specified when creating the texture object as
  described in Texture Object API. Texture references are created at compile time and
  the texture is specified at runtime by binding the texture reference to the texture
  through runtime functions as described in Texture Reference API; several distinct
  texture references might be bound to the same texture or to textures that overlap in
  memory. A texture can be any region of linear memory or a CUDA array (described
  in CUDA Arrays).
‣ Its dimensionality that specifies whether the texture is addressed as a
  one-dimensional array using one texture coordinate, a two-dimensional array using two
  texture coordinates, or a three-dimensional array using three texture coordinates.
  Elements of the array are called texels, short for texture elements. The texture width,
  height, and depth refer to the size of the array in each dimension. Table 14 lists the
  maximum texture width, height, and depth depending on the compute capability of
  the device.
‣ The type of a texel, which is restricted to the basic integer and single-precision
  floating-point types and any of the 1-, 2-, and 4-component vector types defined in
  char, short, int, long, longlong, float, double that are derived from the basic integer
  and single-precision floating-point types.
‣ The read mode, which is equal to cudaReadModeNormalizedFloat or
  cudaReadModeElementType. If it is cudaReadModeNormalizedFloat and the
  type of the texel is a 16-bit or 8-bit integer type, the value returned by the texture
  fetch is actually returned as floating-point type and the full range of the integer type
  is mapped to [0.0, 1.0] for unsigned integer type and [-1.0, 1.0] for signed integer
  type; for example, an unsigned 8-bit texture element with the value 0xff reads as 1. If
  it is cudaReadModeElementType, no conversion is performed.
‣ Whether texture coordinates are normalized or not. By default, textures
  are referenced (by the functions of Texture Functions) using floating-point
  coordinates in the range [0, N-1] where N is the size of the texture in the dimension
  corresponding to the coordinate. For example, a texture that is 64x32 in size will
  be referenced with coordinates in the range [0, 63] and [0, 31] for the x and y
  dimensions, respectively. Normalized texture coordinates cause the coordinates
  to be specified in the range [0.0, 1.0-1/N] instead of [0, N-1], so the same 64x32
  texture would be addressed by normalized coordinates in the range [0, 1-1/N] in
  both the x and y dimensions. Normalized texture coordinates are a natural fit to
  some applications' requirements, if it is preferable for the texture coordinates to be
  independent of the texture size.
‣ The addressing mode. It is valid to call the device functions of Section B.8 with
  coordinates that are out of range. The addressing mode defines what happens
  in that case. The default addressing mode is to clamp the coordinates to the
  valid range: [0, N) for non-normalized coordinates and [0.0, 1.0) for normalized
  coordinates. If the border mode is specified instead, texture fetches with out-of-range
  texture coordinates return zero. For normalized coordinates, the wrap
  mode and the mirror mode are also available. When using the wrap mode, each
  coordinate x is converted to frac(x) = x - floor(x), where floor(x) is the largest integer
  not greater than x. When using the mirror mode, each coordinate x is converted
  to frac(x) if floor(x) is even and 1 - frac(x) if floor(x) is odd. The addressing mode is
  specified as an array of size three whose first, second, and third elements specify the
  addressing mode for the first, second, and third texture coordinates, respectively;
  the addressing modes are cudaAddressModeBorder, cudaAddressModeClamp,
  cudaAddressModeWrap, and cudaAddressModeMirror; cudaAddressModeWrap
  and cudaAddressModeMirror are only supported for normalized texture
  coordinates.
‣ The filtering mode which specifies how the value returned when fetching the texture
  is computed based on the input texture coordinates. Linear texture filtering may be
  done only for textures that are configured to return floating-point data. It performs
  low-precision interpolation between neighboring texels. When enabled, the texels
  surrounding a texture fetch location are read and the return value of the texture
  fetch is interpolated based on where the texture coordinates fell between the texels.
  Simple linear interpolation is performed for one-dimensional textures, bilinear
  interpolation for two-dimensional textures, and trilinear interpolation for
  three-dimensional textures. Texture Fetching gives more details on texture fetching. The
  filtering mode is equal to cudaFilterModePoint or cudaFilterModeLinear. If it
  is cudaFilterModePoint, the returned value is the texel whose texture coordinates
  are the closest to the input texture coordinates. If it is cudaFilterModeLinear, the
  returned value is the linear interpolation of the two (for a one-dimensional texture),
  four (for a two-dimensional texture), or eight (for a three-dimensional texture)
  texels whose texture coordinates are the closest to the input texture coordinates.
  cudaFilterModeLinear is only valid for returned values of floating-point type.
Texture Object API introduces the texture object API.
Texture Reference API introduces the texture reference API.
16-Bit Floating-Point Textures explains how to deal with 16-bit floating-point textures.
Textures can also be layered as described in Layered Textures.
Cubemap Textures and Cubemap Layered Textures describe a special type of texture,
the cubemap texture.
Texture Gather describes a special texture fetch, texture gather.
3.2.11.1.1. Texture Object API
A texture object is created using cudaCreateTextureObject() from a resource
description of type struct cudaResourceDesc, which specifies the texture, and from a
texture description defined as such:
struct cudaTextureDesc
{
    enum cudaTextureAddressMode addressMode[3];
    enum cudaTextureFilterMode  filterMode;
    enum cudaTextureReadMode    readMode;
    int                         sRGB;
    int                         normalizedCoords;
    unsigned int                maxAnisotropy;
    enum cudaTextureFilterMode  mipmapFilterMode;
    float                       mipmapLevelBias;
    float                       minMipmapLevelClamp;
    float                       maxMipmapLevelClamp;
};
‣ addressMode specifies the addressing mode;
‣ filterMode specifies the filter mode;
‣ readMode specifies the read mode;
‣ normalizedCoords specifies whether texture coordinates are normalized or not;
‣ See reference manual for sRGB, maxAnisotropy, mipmapFilterMode,
  mipmapLevelBias, minMipmapLevelClamp, and maxMipmapLevelClamp.
The following code sample applies a simple transformation kernel to a texture using
the texture object API.
// Simple transformation kernel
__global__ void transformKernel(float* output,
                                cudaTextureObject_t texObj,
                                int width, int height,
                                float theta)
{
    // Calculate normalized texture coordinates
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

    float u = x / (float)width;
    float v = y / (float)height;

    // Transform coordinates
    u -= 0.5f;
    v -= 0.5f;
    float tu = u * cosf(theta) - v * sinf(theta) + 0.5f;
    float tv = v * cosf(theta) + u * sinf(theta) + 0.5f;

    // Read from texture and write to global memory
    output[y * width + x] = tex2D<float>(texObj, tu, tv);
}
// Host code
int main()
{
    // Allocate CUDA array in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(32, 0, 0, 0, cudaChannelFormatKindFloat);
    cudaArray* cuArray;
    cudaMallocArray(&cuArray, &channelDesc, width, height);

    // Copy to device memory some data located at address h_data
    // in host memory
    cudaMemcpyToArray(cuArray, 0, 0, h_data, size, cudaMemcpyHostToDevice);

    // Specify texture
    struct cudaResourceDesc resDesc;
    memset(&resDesc, 0, sizeof(resDesc));
    resDesc.resType = cudaResourceTypeArray;
    resDesc.res.array.array = cuArray;

    // Specify texture object parameters
    struct cudaTextureDesc texDesc;
    memset(&texDesc, 0, sizeof(texDesc));
    texDesc.addressMode[0]   = cudaAddressModeWrap;
    texDesc.addressMode[1]   = cudaAddressModeWrap;
    texDesc.filterMode       = cudaFilterModeLinear;
    texDesc.readMode         = cudaReadModeElementType;
    texDesc.normalizedCoords = 1;

    // Create texture object
    cudaTextureObject_t texObj = 0;
    cudaCreateTextureObject(&texObj, &resDesc, &texDesc, NULL);

    // Allocate result of transformation in device memory
    float* output;
    cudaMalloc(&output, width * height * sizeof(float));

    // Invoke kernel
    dim3 dimBlock(16, 16);
    dim3 dimGrid((width  + dimBlock.x - 1) / dimBlock.x,
                 (height + dimBlock.y - 1) / dimBlock.y);
    transformKernel<<<dimGrid, dimBlock>>>(output, texObj, width, height, angle);

    // Destroy texture object
    cudaDestroyTextureObject(texObj);

    // Free device memory
    cudaFreeArray(cuArray);
    cudaFree(output);

    return 0;
}
3.2.11.1.2. Texture Reference API
Some of the attributes of a texture reference are immutable and must be known at
compile time; they are specified when declaring the texture reference. A texture
reference is declared at file scope as a variable of type texture:
texture<DataType, Type, ReadMode> texRef;
where:
‣ DataType specifies the type of the texel;
‣ Type specifies the type of the texture reference and is equal to
  cudaTextureType1D, cudaTextureType2D, or cudaTextureType3D, for a
  one-dimensional, two-dimensional, or three-dimensional texture, respectively,
  or cudaTextureType1DLayered or cudaTextureType2DLayered for a
  one-dimensional or two-dimensional layered texture respectively; Type is an optional
  argument which defaults to cudaTextureType1D;
‣ ReadMode specifies the read mode; it is an optional argument which defaults to
  cudaReadModeElementType.
A texture reference can only be declared as a static global variable and cannot be passed
as an argument to a function.
The other attributes of a texture reference are mutable and can be changed at runtime
through the host runtime. As explained in the reference manual, the runtime API
has a low-level C-style interface and a high-level C++-style interface. The texture
type is defined in the high-level API as a structure publicly derived from the
textureReference type defined in the low-level API as such:
struct textureReference {
    int                          normalized;
    enum cudaTextureFilterMode   filterMode;
    enum cudaTextureAddressMode  addressMode[3];
    struct cudaChannelFormatDesc channelDesc;
    int                          sRGB;
    unsigned int                 maxAnisotropy;
    enum cudaTextureFilterMode   mipmapFilterMode;
    float                        mipmapLevelBias;
    float                        minMipmapLevelClamp;
    float                        maxMipmapLevelClamp;
}
‣ normalized specifies whether texture coordinates are normalized or not;
‣ filterMode specifies the filtering mode;
‣ addressMode specifies the addressing mode;
‣ channelDesc describes the format of the texel; it must match the DataType
  argument of the texture reference declaration; channelDesc is of the following
  type:
  struct cudaChannelFormatDesc {
      int x, y, z, w;
      enum cudaChannelFormatKind f;
  };
  where x, y, z, and w are equal to the number of bits of each component of the
  returned value and f is:
  ‣ cudaChannelFormatKindSigned if these components are of signed integer type,
  ‣ cudaChannelFormatKindUnsigned if they are of unsigned integer type,
  ‣ cudaChannelFormatKindFloat if they are of floating point type.
See reference manual for sRGB, maxAnisotropy, mipmapFilterMode,
mipmapLevelBias, minMipmapLevelClamp, and maxMipmapLevelClamp.
normalized, addressMode, and filterMode may be directly modified in host code.
Before a kernel can use a texture reference to read from texture memory, the
texture reference must be bound to a texture using cudaBindTexture() or
cudaBindTexture2D() for linear memory, or cudaBindTextureToArray() for CUDA
arrays. cudaUnbindTexture() is used to unbind a texture reference. Once a texture
reference has been unbound, it can be safely rebound to another array, even if kernels
that use the previously bound texture have not completed. It is recommended to allocate
two-dimensional textures in linear memory using cudaMallocPitch() and use the
pitch returned by cudaMallocPitch() as input parameter to cudaBindTexture2D().
The following code samples bind a 2D texture reference to linear memory pointed to by
devPtr:
‣ Using the low-level API:
  texture<float, cudaTextureType2D, cudaReadModeElementType> texRef;
  textureReference* texRefPtr;
  cudaGetTextureReference(&texRefPtr, &texRef);
  cudaChannelFormatDesc channelDesc =
      cudaCreateChannelDesc<float>();
  size_t offset;
  cudaBindTexture2D(&offset, texRefPtr, devPtr, &channelDesc,
                    width, height, pitch);
‣ Using the high-level API:
  texture<float, cudaTextureType2D, cudaReadModeElementType> texRef;
  cudaChannelFormatDesc channelDesc =
      cudaCreateChannelDesc<float>();
  size_t offset;
  cudaBindTexture2D(&offset, texRef, devPtr, channelDesc,
                    width, height, pitch);
The following code samples bind a 2D texture reference to a CUDA array cuArray:
‣ Using the low-level API:
  texture<float, cudaTextureType2D, cudaReadModeElementType> texRef;
  textureReference* texRefPtr;
  cudaGetTextureReference(&texRefPtr, &texRef);
  cudaChannelFormatDesc channelDesc;
  cudaGetChannelDesc(&channelDesc, cuArray);
  cudaBindTextureToArray(texRef, cuArray, &channelDesc);
‣ Using the high-level API:
  texture<float, cudaTextureType2D, cudaReadModeElementType> texRef;
  cudaBindTextureToArray(texRef, cuArray);
The format specified when binding a texture to a texture reference must match the
parameters specified when declaring the texture reference; otherwise, the results of
texture fetches are undefined.
There is a limit to the number of textures that can be bound to a kernel as specified in
Table 14.
The following code sample applies a simple transformation kernel to a texture using
the texture reference API.
// 2D float texture
texture<float, cudaTextureType2D, cudaReadModeElementType> texRef;

// Simple transformation kernel
__global__ void transformKernel(float* output,
                                int width, int height,
                                float theta)
{
    // Calculate normalized texture coordinates
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

    float u = x / (float)width;
    float v = y / (float)height;

    // Transform coordinates
    u -= 0.5f;
    v -= 0.5f;
    float tu = u * cosf(theta) - v * sinf(theta) + 0.5f;
    float tv = v * cosf(theta) + u * sinf(theta) + 0.5f;

    // Read from texture and write to global memory
    output[y * width + x] = tex2D(texRef, tu, tv);
}
// Host code
int main()
{
    // Allocate CUDA array in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(32, 0, 0, 0, cudaChannelFormatKindFloat);
    cudaArray* cuArray;
    cudaMallocArray(&cuArray, &channelDesc, width, height);

    // Copy to device memory some data located at address h_data
    // in host memory
    cudaMemcpyToArray(cuArray, 0, 0, h_data, size, cudaMemcpyHostToDevice);

    // Set texture reference parameters
    texRef.addressMode[0] = cudaAddressModeWrap;
    texRef.addressMode[1] = cudaAddressModeWrap;
    texRef.filterMode     = cudaFilterModeLinear;
    texRef.normalized     = true;

    // Bind the array to the texture reference
    cudaBindTextureToArray(texRef, cuArray, channelDesc);

    // Allocate result of transformation in device memory
    float* output;
    cudaMalloc(&output, width * height * sizeof(float));

    // Invoke kernel
    dim3 dimBlock(16, 16);
    dim3 dimGrid((width  + dimBlock.x - 1) / dimBlock.x,
                 (height + dimBlock.y - 1) / dimBlock.y);
    transformKernel<<<dimGrid, dimBlock>>>(output, width, height, angle);

    // Free device memory
    cudaFreeArray(cuArray);
    cudaFree(output);

    return 0;
}
3.2.11.1.3. 16-Bit Floating-Point Textures
The 16-bit floating-point or half format supported by CUDA arrays is the same as the
IEEE 754-2008 binary16 format.
CUDA C does not support a matching data type, but provides intrinsic functions to
convert to and from the 32-bit floating-point format via the unsigned short type:
__float2half_rn(float) and __half2float(unsigned short). These functions
are only supported in device code. Equivalent functions for the host code can be found
in the OpenEXR library, for example.
16-bit floating-point components are promoted to 32 bit float during texture fetching
before any filtering is performed.
A channel description for the 16-bit floating-point format can be created by calling one
of the cudaCreateChannelDescHalf*() functions.
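For example (a minimal sketch):
// Device code: convert to and from the half storage format.
unsigned short h = __float2half_rn(1.5f);
float f = __half2float(h);

// Host code: channel description for a one-component half-float CUDA array.
cudaChannelFormatDesc channelDesc = cudaCreateChannelDescHalf();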
3.2.11.1.4. Layered Textures
A one-dimensional or two-dimensional layered texture (also known as texture array in
Direct3D and array texture in OpenGL) is a texture made up of a sequence of layers, all of
which are regular textures of same dimensionality, size, and data type.
A one-dimensional layered texture is addressed using an integer index and a
floating-point texture coordinate; the index denotes a layer within the sequence and the
coordinate addresses a texel within that layer. A two-dimensional layered texture is
addressed using an integer index and two floating-point texture coordinates; the index
denotes a layer within the sequence and the coordinates address a texel within that layer.
A layered texture can only be a CUDA array, created by calling cudaMalloc3DArray()
with the cudaArrayLayered flag (and a height of zero for a one-dimensional layered
texture).
Layered textures are fetched using the device functions described in tex1DLayered()
and tex2DLayered(). Texture filtering (see Texture Fetching) is done only within a layer,
not across layers.
Layered textures are only supported on devices of compute capability 2.0 and higher.
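For example, a two-dimensional layered CUDA array with 8 layers of width x height
float texels could be allocated as follows (a sketch; width and height are assumed):
cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
cudaExtent extent = make_cudaExtent(width, height, 8); // depth = number of layers
cudaArray* layeredArray;
cudaMalloc3DArray(&layeredArray, &channelDesc, extent, cudaArrayLayered);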
3.2.11.1.5. Cubemap Textures
A cubemap texture is a special type of two-dimensional layered texture that has six layers
representing the faces of a cube:
‣ The width of a layer is equal to its height.
‣ The cubemap is addressed using three texture coordinates x, y, and z that are
  interpreted as a direction vector emanating from the center of the cube and pointing
  to one face of the cube and a texel within the layer corresponding to that face. More
  specifically, the face is selected by the coordinate with largest magnitude m and the
  corresponding layer is addressed using coordinates (s/m+1)/2 and (t/m+1)/2 where s
  and t are defined in Table 1.
Table 1 Cubemap Fetch

                            face    m     s     t
|x| > |y| and |x| > |z|
    x > 0                   0       x     -z    -y
    x < 0                   1       -x    z     -y
|y| > |x| and |y| > |z|
    y > 0                   2       y     x     z
    y < 0                   3       -y    x     -z
|z| > |x| and |z| > |y|
    z > 0                   4       z     x     -y
    z < 0                   5       -z    -x    -y
A cubemap texture can only be a CUDA array, created by calling cudaMalloc3DArray()
with the cudaArrayCubemap flag.
Cubemap textures are fetched using the device function texCubemap().
Cubemap textures are only supported on devices of compute capability 2.0 and higher.
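As an illustration, a sketch of a fetch from a cubemap texture reference along a
direction vector (the declaration and binding of the reference are assumed to be set up
as described in Texture Reference API):
texture<float4, cudaTextureTypeCubemap> cubeTexRef;

__global__ void fetchKernel(float4* output, float dx, float dy, float dz)
{
    // (dx, dy, dz) is interpreted as a direction from the center of the cube
    output[0] = texCubemap(cubeTexRef, dx, dy, dz);
}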
3.2.11.1.6. Cubemap Layered Textures
A cubemap layered texture is a layered texture whose layers are cubemaps of same
dimension.
A cubemap layered texture is addressed using an integer index and three floating-point
texture coordinates; the index denotes a cubemap within the sequence and the
coordinates address a texel within that cubemap.
A cubemap layered texture can only be a CUDA array, created by calling
cudaMalloc3DArray() with the cudaArrayLayered and cudaArrayCubemap flags.
Cubemap layered textures are fetched using the device function
texCubemapLayered(). Texture filtering (see Texture
Fetching) is done only within a layer, not across layers.
Cubemap layered textures are only supported on devices of compute capability 2.0 and
higher.
3.2.11.1.7. Texture Gather
Texture gather is a special texture fetch that is available for two-dimensional textures
only. It is performed by the tex2Dgather() function, which has the same parameters
as tex2D(), plus an additional comp parameter equal to 0, 1, 2, or 3 (see tex2Dgather()).
It returns four 32-bit numbers that correspond to the value of the
component comp of each of the four texels that would have been used for bilinear
filtering during a regular texture fetch. For example, if these texels are of values
(253, 20, 31, 255), (250, 25, 29, 254), (249, 16, 37, 253), (251, 22, 30, 250), and comp is 2,
tex2Dgather() returns (31, 29, 37, 30).
Note that texture coordinates are computed with only 8 bits of fractional precision.
tex2Dgather() may therefore return unexpected results for cases where tex2D()
would use 1.0 for one of its weights (α or β, see Linear Filtering). For example, with an
x texture coordinate of 2.49805: xB = x - 0.5 = 1.99805; however, the fractional part of xB
is stored in an 8-bit fixed-point format. Since 0.99805 is closer to 256.f/256.f than it is to
255.f/256.f, xB has the value 2. A tex2Dgather() in this case would therefore return
indices 2 and 3 in x, instead of indices 1 and 2.
Texture gather is only supported for CUDA arrays created with the
cudaArrayTextureGather flag and of width and height less than the maximum
specified in Table 14 for texture gather, which is smaller than for regular texture fetch.
Texture gather is only supported on devices of compute capability 2.0 and higher.
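A sketch of such a fetch in device code (texRef is an assumed 2D texture reference
bound to a CUDA array created with the cudaArrayTextureGather flag):
// Gather component 0 of the four texels surrounding the fetch location (x, y).
float4 g = tex2Dgather(texRef, x, y, 0);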
3.2.11.2. Surface Memory
For devices of compute capability 2.0 and higher, a CUDA array (described in CUDA
Arrays), created with the cudaArraySurfaceLoadStore flag, can be read and written
via a surface object or surface reference using the functions described in Surface Functions.
Table 14 lists the maximum surface width, height, and depth depending on the compute
capability of the device.
3.2.11.2.1. Surface Object API
A surface object is created using cudaCreateSurfaceObject() from a resource
description of type struct cudaResourceDesc.
The following code sample uses a simple copy kernel to copy the content of one
surface object to another.
// Simple copy kernel
__global__ void copyKernel(cudaSurfaceObject_t inputSurfObj,
                           cudaSurfaceObject_t outputSurfObj,
                           int width, int height)
{
    // Calculate surface coordinates
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        uchar4 data;
        // Read from input surface
        surf2Dread(&data, inputSurfObj, x * 4, y);
        // Write to output surface
        surf2Dwrite(data, outputSurfObj, x * 4, y);
    }
}

// Host code
int main()
{
    // Allocate CUDA arrays in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsigned);
    cudaArray* cuInputArray;
    cudaMallocArray(&cuInputArray, &channelDesc, width, height,
                    cudaArraySurfaceLoadStore);
    cudaArray* cuOutputArray;
    cudaMallocArray(&cuOutputArray, &channelDesc, width, height,
                    cudaArraySurfaceLoadStore);

    // Copy to device memory some data located at address h_data
    // in host memory
    cudaMemcpyToArray(cuInputArray, 0, 0, h_data, size,
                      cudaMemcpyHostToDevice);

    // Specify surface
    struct cudaResourceDesc resDesc;
    memset(&resDesc, 0, sizeof(resDesc));
    resDesc.resType = cudaResourceTypeArray;

    // Create the surface objects
    resDesc.res.array.array = cuInputArray;
    cudaSurfaceObject_t inputSurfObj = 0;
    cudaCreateSurfaceObject(&inputSurfObj, &resDesc);
    resDesc.res.array.array = cuOutputArray;
    cudaSurfaceObject_t outputSurfObj = 0;
    cudaCreateSurfaceObject(&outputSurfObj, &resDesc);

    // Invoke kernel
    dim3 dimBlock(16, 16);
    dim3 dimGrid((width  + dimBlock.x - 1) / dimBlock.x,
                 (height + dimBlock.y - 1) / dimBlock.y);
    copyKernel<<<dimGrid, dimBlock>>>(inputSurfObj, outputSurfObj,
                                      width, height);

    // Destroy surface objects
    cudaDestroySurfaceObject(inputSurfObj);
    cudaDestroySurfaceObject(outputSurfObj);

    // Free device memory
    cudaFreeArray(cuInputArray);
    cudaFreeArray(cuOutputArray);

    return 0;
}
3.2.11.2.2. Surface Reference API
A surface reference is declared at file scope as a variable of type surface:
surface<void, Type> surfRef;
where Type specifies the type of the surface reference and is equal to
cudaSurfaceType1D, cudaSurfaceType2D, cudaSurfaceType3D,
cudaSurfaceTypeCubemap, cudaSurfaceType1DLayered,
cudaSurfaceType2DLayered, or cudaSurfaceTypeCubemapLayered; Type is an
optional argument which defaults to cudaSurfaceType1D. A surface reference can only
be declared as a static global variable and cannot be passed as an argument to a function.
Before a kernel can use a surface reference to access a CUDA array, the surface reference
must be bound to the CUDA array using cudaBindSurfaceToArray().
The following code samples bind a surface reference to a CUDA array cuArray:
‣ Using the low-level API:
  surface<void, 2> surfRef;
  surfaceReference* surfRefPtr;
  cudaGetSurfaceReference(&surfRefPtr, "surfRef");
  cudaChannelFormatDesc channelDesc;
  cudaGetChannelDesc(&channelDesc, cuArray);
  cudaBindSurfaceToArray(surfRef, cuArray, &channelDesc);
‣ Using the high-level API:
  surface<void, 2> surfRef;
  cudaBindSurfaceToArray(surfRef, cuArray);
A CUDA array must be read and written using surface functions of matching
dimensionality and type and via a surface reference of matching dimensionality;
otherwise, the results of reading and writing the CUDA array are undefined.
Unlike texture memory, surface memory uses byte addressing. This means that
the x-coordinate used to access a texture element via texture functions needs to be
multiplied by the byte size of the element to access the same element via a surface
function. For example, the element at texture coordinate x of a one-dimensional
floating-point CUDA array bound to a texture reference texRef and a surface reference
surfRef is read using tex1D(texRef, x) via texRef, but surf1Dread(surfRef,
4*x) via surfRef. Similarly, the element at texture coordinates x and y of a
two-dimensional floating-point CUDA array bound to a texture reference texRef and a
surface reference surfRef is accessed using tex2D(texRef, x, y) via texRef, but
surf2Dread(surfRef, 4*x, y) via surfRef (the byte offset of the y-coordinate is
internally calculated from the underlying line pitch of the CUDA array).
The following code sample uses a simple copy kernel to copy the content of one
surface to another via surface references.
// 2D surfaces
surface<void, 2> inputSurfRef;
surface<void, 2> outputSurfRef;

// Simple copy kernel
__global__ void copyKernel(int width, int height)
{
    // Calculate surface coordinates
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        uchar4 data;
        // Read from input surface
        surf2Dread(&data, inputSurfRef, x * 4, y);
        // Write to output surface
        surf2Dwrite(data, outputSurfRef, x * 4, y);
    }
}

// Host code
int main()
{
    // Allocate CUDA arrays in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsigned);
    cudaArray* cuInputArray;
    cudaMallocArray(&cuInputArray, &channelDesc, width, height,
                    cudaArraySurfaceLoadStore);
    cudaArray* cuOutputArray;
    cudaMallocArray(&cuOutputArray, &channelDesc, width, height,
                    cudaArraySurfaceLoadStore);

    // Copy to device memory some data located at address h_data
    // in host memory
    cudaMemcpyToArray(cuInputArray, 0, 0, h_data, size,
                      cudaMemcpyHostToDevice);

    // Bind the arrays to the surface references
    cudaBindSurfaceToArray(inputSurfRef, cuInputArray);
    cudaBindSurfaceToArray(outputSurfRef, cuOutputArray);

    // Invoke kernel
    dim3 dimBlock(16, 16);
    dim3 dimGrid((width  + dimBlock.x - 1) / dimBlock.x,
                 (height + dimBlock.y - 1) / dimBlock.y);
    copyKernel<<<dimGrid, dimBlock>>>(width, height);

    // Free device memory
    cudaFreeArray(cuInputArray);
    cudaFreeArray(cuOutputArray);

    return 0;
}
3.2.11.2.3. Cubemap Surfaces
Cubemap surfaces are accessed using surfCubemapread() and surfCubemapwrite()
as a two-dimensional layered surface, i.e., using an integer index denoting a face and
two floating-point texture coordinates addressing a texel within the layer corresponding
to this face. Faces are ordered as indicated in Table 1.
3.2.11.2.4. Cubemap Layered Surfaces
Cubemap layered surfaces are accessed using surfCubemapLayeredread()
and surfCubemapLayeredwrite() as a two-dimensional layered surface, i.e., using an
integer index denoting a face of one of the cubemaps and two floating-point texture
coordinates addressing a texel within the layer corresponding to this face. Faces are
ordered as indicated in Table 1, so index ((2 * 6) + 3), for example, accesses the fourth
face of the third cubemap.
3.2.11.3. CUDA Arrays
CUDA arrays are opaque memory layouts optimized for texture fetching. They are one
dimensional, two dimensional, or three-dimensional and composed of elements, each of
which has 1, 2 or 4 components that may be signed or unsigned 8-, 16-, or 32-bit integers,
16-bit floats, or 32-bit floats. CUDA arrays are only accessible by kernels through texture
fetching as described in Texture Memory or surface reading and writing as described in
Surface Memory.
3.2.11.4. Read/Write Coherency
The texture and surface memory is cached (see Device Memory Accesses) and within
the same kernel call, the cache is not kept coherent with respect to global memory
writes and surface memory writes, so any texture fetch or surface read to an address
that has been written to via a global write or a surface write in the same kernel call
returns undefined data. In other words, a thread can safely read some texture or surface
memory location only if this memory location has been updated by a previous kernel
call or memory copy, but not if it has been previously updated by the same thread or
another thread from the same kernel call.
3.2.12. Graphics Interoperability
Some resources from OpenGL and Direct3D may be mapped into the address space of
CUDA, either to enable CUDA to read data written by OpenGL or Direct3D, or to enable
CUDA to write data for consumption by OpenGL or Direct3D.
A resource must be registered to CUDA before it can be mapped using the
functions mentioned in OpenGL Interoperability and Direct3D Interoperability.
These functions return a pointer to a CUDA graphics resource of type struct
cudaGraphicsResource. Registering a resource is potentially high-overhead and
therefore typically called only once per resource. A CUDA graphics resource is
unregistered using cudaGraphicsUnregisterResource(). Each CUDA context which
intends to use the resource is required to register it separately.
Once a resource is registered to CUDA, it can be mapped and unmapped
as many times as necessary using cudaGraphicsMapResources() and
cudaGraphicsUnmapResources(). cudaGraphicsResourceSetMapFlags() can be
called to specify usage hints (write-only, read-only) that the CUDA driver can use to
optimize resource management.
A mapped resource can be read from or written to by kernels using the device memory
address returned by cudaGraphicsResourceGetMappedPointer() for buffers and
cudaGraphicsSubResourceGetMappedArray() for CUDA arrays.
Accessing a resource through OpenGL, Direct3D, or another CUDA context while
it is mapped produces undefined results. OpenGL Interoperability and Direct3D
Interoperability give specifics for each graphics API and some code samples. SLI
Interoperability gives specifics for when the system is in SLI mode.
3.2.12.1. OpenGL Interoperability
The OpenGL resources that may be mapped into the address space of CUDA are
OpenGL buffer, texture, and renderbuffer objects.
A buffer object is registered using cudaGraphicsGLRegisterBuffer(). In CUDA,
it appears as a device pointer and can therefore be read and written by kernels or via
cudaMemcpy() calls.
A texture or renderbuffer object is registered using
cudaGraphicsGLRegisterImage(). In CUDA, it appears as a CUDA array. Kernels
can read from the array by binding it to a texture or surface reference. They can also
write to it via the surface write functions if the resource has been registered with
the cudaGraphicsRegisterFlagsSurfaceLoadStore flag. The array can also be
read and written via cudaMemcpy2D() calls. cudaGraphicsGLRegisterImage()
supports all texture formats with 1, 2, or 4 components and an internal type of float
(e.g., GL_RGBA_FLOAT32), normalized integer (e.g., GL_RGBA8, GL_INTENSITY16), and
unnormalized integer (e.g., GL_RGBA8UI) (please note that since unnormalized integer
formats require OpenGL 3.0, they can only be written by shaders, not the fixed function
pipeline).
The OpenGL context whose resources are being shared has to be current to the host
thread making any OpenGL interoperability API calls.
Please note: When an OpenGL texture is made bindless (say for example by requesting
an image or texture handle using the glGetTextureHandle*/glGetImageHandle* APIs)
it cannot be registered with CUDA. The application needs to register the texture for
interop before requesting an image or texture handle.
The following code sample uses a kernel to dynamically modify a 2D width x height
grid of vertices stored in a vertex buffer object:
GLuint positionsVBO;
struct cudaGraphicsResource* positionsVBO_CUDA;

int main()
{
    // Initialize OpenGL and GLUT for device 0
    // and make the OpenGL context current
    ...
    glutDisplayFunc(display);

    // Explicitly set device 0
    cudaSetDevice(0);

    // Create buffer object and register it with CUDA
    glGenBuffers(1, &positionsVBO);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    unsigned int size = width * height * 4 * sizeof(float);
    glBufferData(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    cudaGraphicsGLRegisterBuffer(&positionsVBO_CUDA,
                                 positionsVBO,
                                 cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    glutMainLoop();

    ...
}

void display()
{
    // Map buffer object for writing from CUDA
    float4* positions;
    cudaGraphicsMapResources(1, &positionsVBO_CUDA, 0);
    size_t num_bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                         &num_bytes,
                                         positionsVBO_CUDA);

    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);

    // Unmap buffer object
    cudaGraphicsUnmapResources(1, &positionsVBO_CUDA, 0);

    // Render from buffer object
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindBuffer(GL_ARRAY_BUFFER, positionsVBO);
    glVertexPointer(4, GL_FLOAT, 0, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_POINTS, 0, width * height);
    glDisableClientState(GL_VERTEX_ARRAY);

    // Swap buffers
    glutSwapBuffers();
    glutPostRedisplay();
}
void deleteVBO()
{
    cudaGraphicsUnregisterResource(positionsVBO_CUDA);
    glDeleteBuffers(1, &positionsVBO);
}

__global__ void createVertices(float4* positions, float time,
                               unsigned int width, unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;

    // calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time)
              * cosf(v * freq + time) * 0.5f;

    // Write positions
    positions[y * width + x] = make_float4(u, w, v, 1.0f);
}
On Windows and for Quadro GPUs, cudaWGLGetDevice() can be used to retrieve the
CUDA device associated to the handle returned by wglEnumGpusNV(). Quadro GPUs
offer higher performance OpenGL interoperability than GeForce and Tesla GPUs in a
multi-GPU configuration where OpenGL rendering is performed on the Quadro GPU
and CUDA computations are performed on other GPUs in the system.
3.2.12.2. Direct3D Interoperability
Direct3D interoperability is supported for Direct3D 9Ex, Direct3D 10, and Direct3D 11.
A CUDA context may interoperate only with Direct3D devices that
fulfill the following criteria: Direct3D 9Ex devices must be created with
DeviceType set to D3DDEVTYPE_HAL and BehaviorFlags with the
D3DCREATE_HARDWARE_VERTEXPROCESSING flag; Direct3D 10 and Direct3D 11 devices
must be created with DriverType set to D3D_DRIVER_TYPE_HARDWARE.
The Direct3D resources that may be mapped into the address space of
CUDA are Direct3D buffers, textures, and surfaces. These resources
are registered using cudaGraphicsD3D9RegisterResource(),
cudaGraphicsD3D10RegisterResource(), and
cudaGraphicsD3D11RegisterResource().
The following code sample uses a kernel to dynamically modify a 2D width x height
grid of vertices stored in a vertex buffer object.
3.2.12.2.1. Direct3D 9 Version
IDirect3D9* D3D;
IDirect3DDevice9* device;
struct CUSTOMVERTEX {
    FLOAT x, y, z;
    DWORD color;
};
IDirect3DVertexBuffer9* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;

int main()
{
    int dev;
    // Initialize Direct3D
    D3D = Direct3DCreate9Ex(D3D_SDK_VERSION);

    // Get a CUDA-enabled adapter
    unsigned int adapter = 0;
    for (; adapter < g_pD3D->GetAdapterCount(); adapter++) {
        D3DADAPTER_IDENTIFIER9 adapterId;
        g_pD3D->GetAdapterIdentifier(adapter, 0, &adapterId);
        if (cudaD3D9GetDevice(&dev, adapterId.DeviceName)
            == cudaSuccess)
            break;
    }

    // Create device
    ...
    D3D->CreateDeviceEx(adapter, D3DDEVTYPE_HAL, hWnd,
                        D3DCREATE_HARDWARE_VERTEXPROCESSING,
                        &params, NULL, &device);

    // Use the same device
    cudaSetDevice(dev);

    // Create vertex buffer and register it with CUDA
    unsigned int size = width * height * sizeof(CUSTOMVERTEX);
    device->CreateVertexBuffer(size, 0, D3DFVF_CUSTOMVERTEX,
                               D3DPOOL_DEFAULT, &positionsVB, 0);
    cudaGraphicsD3D9RegisterResource(&positionsVB_CUDA,
                                     positionsVB,
                                     cudaGraphicsRegisterFlagsNone);
    cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                    cudaGraphicsMapFlagsWriteDiscard);

    // Launch rendering loop
    while (...) {
        ...
        Render();
        ...
    }
    ...
}
void Render()
{
    // Map vertex buffer for writing from CUDA
    float4* positions;
    cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                         &num_bytes,
                                         positionsVB_CUDA);

    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);

    // Unmap vertex buffer
    cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}

void releaseVB()
{
    cudaGraphicsUnregisterResource(positionsVB_CUDA);
    positionsVB->Release();
}

__global__ void createVertices(float4* positions, float time,
                               unsigned int width, unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;

    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;

    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time)
              * cosf(v * freq + time) * 0.5f;

    // Write positions
    positions[y * width + x] =
        make_float4(u, w, v, __int_as_float(0xff00ff00));
}
3.2.12.2.2. Direct3D 10 Version
ID3D10Device* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
ID3D10Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;
int main()
{
int dev;
// Get a CUDA-enabled adapter
IDXGIFactory* factory;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
IDXGIAdapter* adapter = 0;
for (unsigned int i = 0; !adapter; ++i) {
if (FAILED(factory->EnumAdapters(i, &adapter))
break;
if (cudaD3D10GetDevice(&dev, adapter) == cudaSuccess)
break;
adapter->Release();
}
factory->Release();
// Create swap chain and device
...
D3D10CreateDeviceAndSwapChain(adapter,
D3D10_DRIVER_TYPE_HARDWARE, 0,
D3D10_CREATE_DEVICE_DEBUG,
D3D10_SDK_VERSION,
&swapChainDesc, &swapChain,
&device);
adapter->Release();
// Use the same device
cudaSetDevice(dev);
// Create vertex buffer and register it with CUDA
unsigned int size = width * height * sizeof(CUSTOMVERTEX);
D3D10_BUFFER_DESC bufferDesc;
bufferDesc.Usage          = D3D10_USAGE_DEFAULT;
bufferDesc.ByteWidth      = size;
bufferDesc.BindFlags      = D3D10_BIND_VERTEX_BUFFER;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags      = 0;
device->CreateBuffer(&bufferDesc, 0, &positionsVB);
cudaGraphicsD3D10RegisterResource(&positionsVB_CUDA,
positionsVB,
cudaGraphicsRegisterFlagsNone);
cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
cudaGraphicsMapFlagsWriteDiscard);
// Launch rendering loop
while (...) {
...
Render();
...
}
...
}
void Render()
{
// Map vertex buffer for writing from CUDA
float4* positions;
cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
size_t num_bytes;
cudaGraphicsResourceGetMappedPointer((void**)&positions,
&num_bytes,
positionsVB_CUDA);
// Execute kernel
dim3 dimBlock(16, 16, 1);
dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
createVertices<<<dimGrid, dimBlock>>>(positions, time,
width, height);
// Unmap vertex buffer
cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);
// Draw and present
...
}
void releaseVB()
{
cudaGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}
__global__ void createVertices(float4* positions, float time,
unsigned int width, unsigned int height)
{
unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
// Calculate uv coordinates
float u = x / (float)width;
float v = y / (float)height;
u = u * 2.0f - 1.0f;
v = v * 2.0f - 1.0f;
// Calculate simple sine wave pattern
float freq = 4.0f;
float w = sinf(u * freq + time)
* cosf(v * freq + time) * 0.5f;
// Write positions
positions[y * width + x] =
make_float4(u, w, v, __int_as_float(0xff00ff00));
}
3.2.12.2.3. Direct3D 11 Version
ID3D11Device* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
ID3D11Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;
int main()
{
int dev;
// Get a CUDA-enabled adapter
IDXGIFactory* factory;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
IDXGIAdapter* adapter = 0;
for (unsigned int i = 0; !adapter; ++i) {
if (FAILED(factory->EnumAdapters(i, &adapter)))
break;
if (cudaD3D11GetDevice(&dev, adapter) == cudaSuccess)
break;
adapter->Release();
}
factory->Release();
// Create swap chain and device
...
sFnPtr_D3D11CreateDeviceAndSwapChain(adapter,
D3D11_DRIVER_TYPE_HARDWARE,
0,
D3D11_CREATE_DEVICE_DEBUG,
featureLevels, 3,
D3D11_SDK_VERSION,
&swapChainDesc, &swapChain,
&device,
&featureLevel,
&deviceContext);
adapter->Release();
// Use the same device
cudaSetDevice(dev);
// Create vertex buffer and register it with CUDA
unsigned int size = width * height * sizeof(CUSTOMVERTEX);
D3D11_BUFFER_DESC bufferDesc;
bufferDesc.Usage          = D3D11_USAGE_DEFAULT;
bufferDesc.ByteWidth      = size;
bufferDesc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
bufferDesc.CPUAccessFlags = 0;
bufferDesc.MiscFlags      = 0;
device->CreateBuffer(&bufferDesc, 0, &positionsVB);
cudaGraphicsD3D11RegisterResource(&positionsVB_CUDA,
positionsVB,
cudaGraphicsRegisterFlagsNone);
cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
cudaGraphicsMapFlagsWriteDiscard);
// Launch rendering loop
while (...) {
...
Render();
...
}
...
}
void Render()
{
// Map vertex buffer for writing from CUDA
float4* positions;
cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
size_t num_bytes;
cudaGraphicsResourceGetMappedPointer((void**)&positions,
&num_bytes,
positionsVB_CUDA);
// Execute kernel
dim3 dimBlock(16, 16, 1);
dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
createVertices<<<dimGrid, dimBlock>>>(positions, time,
width, height);
// Unmap vertex buffer
cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);
// Draw and present
...
}
void releaseVB()
{
cudaGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}
__global__ void createVertices(float4* positions, float time,
unsigned int width, unsigned int height)
{
unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
// Calculate uv coordinates
float u = x / (float)width;
float v = y / (float)height;
u = u * 2.0f - 1.0f;
v = v * 2.0f - 1.0f;
// Calculate simple sine wave pattern
float freq = 4.0f;
float w = sinf(u * freq + time)
* cosf(v * freq + time) * 0.5f;
// Write positions
positions[y * width + x] =
make_float4(u, w, v, __int_as_float(0xff00ff00));
}
3.2.12.3. SLI Interoperability
In a system with multiple GPUs, all CUDA-enabled GPUs are accessible via the CUDA
driver and runtime as separate devices. There are however special considerations as
described below when the system is in SLI mode.
First, an allocation in one CUDA device on one GPU will consume memory on other
GPUs that are part of the SLI configuration of the Direct3D or OpenGL device. Because
of this, allocations may fail earlier than otherwise expected.
Second, applications should create multiple CUDA contexts, one for each GPU in the SLI
configuration. While this is not a strict requirement, it avoids unnecessary data transfers
between devices. The application can use the set of calls cudaD3D[9|10|11]GetDevices()
for Direct3D and cudaGLGetDevices() for OpenGL to identify the CUDA
device handle(s) for the device(s) that are performing the rendering in the current
and next frame. Given this information the application will typically choose the
appropriate device and map Direct3D or OpenGL resources to the CUDA device
returned by cudaD3D[9|10|11]GetDevices() or cudaGLGetDevices() when the
deviceList parameter is set to cudaD3D[9|10|11]DeviceListCurrentFrame or
cudaGLDeviceListCurrentFrame.
Please note that resources returned from cudaGraphicsD3D[9|10|
11]RegisterResource and cudaGraphicsGLRegister[Buffer|Image] must
only be used on the device on which the registration happened. Therefore, on SLI
configurations, when data for different frames is computed on different CUDA devices,
it is necessary to register the resources for each device separately.
See Direct3D Interoperability and OpenGL Interoperability for details on how the
CUDA runtime interoperates with Direct3D and OpenGL, respectively.
3.3. Versioning and Compatibility
There are two version numbers that developers should care about when developing a
CUDA application: The compute capability that describes the general specifications and
features of the compute device (see Compute Capability) and the version of the CUDA
driver API that describes the features supported by the driver API and runtime.
The version of the driver API is defined in the driver header file as CUDA_VERSION. It
allows developers to check whether their application requires a newer device driver
than the one currently installed. This is important, because the driver API is backward
compatible, meaning that applications, plug-ins, and libraries (including the C runtime)
compiled against a particular version of the driver API will continue to work on
subsequent device driver releases as illustrated in Figure 11. The driver API is not
forward compatible, which means that applications, plug-ins, and libraries (including the
C runtime) compiled against a particular version of the driver API will not work on
previous versions of the device driver.
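As an illustration of this check, the following sketch (not part of the original guide's
samples) compares the CUDA version supported by the installed driver with the version
of the runtime in use; cudaDriverGetVersion() and cudaRuntimeGetVersion() are
runtime API calls, and CUDART_VERSION is the version the application was compiled
against.
// Illustrative sketch only: comparing driver, runtime, and build-time CUDA versions.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);   // highest CUDA version supported by the installed driver
    cudaRuntimeGetVersion(&runtimeVersion); // version of the CUDA Runtime in use
    printf("Driver supports CUDA %d, runtime is CUDA %d, compiled against %d\n",
           driverVersion, runtimeVersion, CUDART_VERSION);
    if (driverVersion < runtimeVersion)
        printf("The installed device driver is older than the runtime and may need to be updated.\n");
    return 0;
}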
It is important to note that there are limitations on the mixing and matching of versions
that are supported:
‣ Since only one version of the CUDA Driver can be installed at a time on a system,
  the installed driver must be of the same or higher version than the maximum Driver
  API version against which any application, plug-ins, or libraries that must run on
  that system were built.
‣ All plug-ins and libraries used by an application must use the same version of the
  CUDA Runtime unless they statically link to the Runtime, in which case multiple
  versions of the runtime can coexist in the same process space. Note that if nvcc is
  used to link the application, the static version of the CUDA Runtime library will
  be used by default, and all CUDA Toolkit libraries are statically linked against the
  CUDA Runtime.
‣ All plug-ins and libraries used by an application must use the same version of any
  libraries that use the runtime (such as cuFFT, cuBLAS, ...) unless statically linking to
  those libraries.
Figure 11 The Driver API Is Backward but Not Forward Compatible
3.4. Compute Modes
On Tesla solutions running Windows Server 2008 and later or Linux, one can set
any device in a system in one of the following modes using NVIDIA's System
Management Interface (nvidia-smi), which is a tool distributed as part of the driver:
‣ Default compute mode: Multiple host threads can use the device (by calling
  cudaSetDevice() on this device, when using the runtime API, or by making
  current a context associated to the device, when using the driver API) at the same
  time.
‣ Exclusive-process compute mode: Only one CUDA context may be created on the
  device across all processes in the system and that context may be current to as many
  threads as desired within the process that created that context.
‣ Exclusive-process-and-thread compute mode: Only one CUDA context may be created
  on the device across all processes in the system and that context may only be current
  to one thread at a time.
‣ Prohibited compute mode: No CUDA context can be created on the device.
This means, in particular, that a host thread using the runtime API without explicitly
calling cudaSetDevice() might be associated with a device other than device 0 if
device 0 turns out to be in the exclusive-process mode and used by another process, or
in the exclusive-process-and-thread mode and used by another thread, or in prohibited
mode. cudaSetValidDevices() can be used to set a device from a prioritized list of
devices.
Note also that, for devices featuring the Pascal architecture onwards (compute
capability with major revision number 6 and higher), there exists support for
Compute Preemption. This allows compute tasks to be preempted at instruction-level
granularity, rather than thread block granularity as in prior Maxwell and Kepler
GPU architectures, with the benefit that applications with long-running kernels
can be prevented from either monopolizing the system or timing out. However,
there will be context switch overheads associated with Compute Preemption,
which is automatically enabled on those devices for which support exists. The
individual attribute query function cudaDeviceGetAttribute() with the attribute
cudaDevAttrComputePreemptionSupported can be used to determine if the device
in use supports Compute Preemption. Users wishing to avoid context switch overheads
associated with different processes can ensure that only one process is active on the GPU
by selecting exclusive-process mode.
Applications may query the compute mode of a device by checking the computeMode
device property (see Device Enumeration).
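For illustration, the following sketch (not from the original guide) reads both the
computeMode property and the Compute Preemption attribute for device 0; the device
index and the printed strings are example choices.
// Illustrative sketch only: querying the compute mode and Compute Preemption support.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int dev = 0;
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    // prop.computeMode identifies the mode described above
    // (default, exclusive-process, exclusive-process-and-thread, or prohibited).
    printf("Compute mode of device %d: %d\n", dev, prop.computeMode);

    int preemptionSupported = 0;
    cudaDeviceGetAttribute(&preemptionSupported,
                           cudaDevAttrComputePreemptionSupported, dev);
    printf("Compute Preemption supported: %s\n", preemptionSupported ? "yes" : "no");
    return 0;
}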
3.5. Mode Switches
GPUs that have a display output dedicate some DRAM memory to the so-called primary
surface, which is used to refresh the display device whose output is viewed by the user.
When users initiate a mode switch of the display by changing the resolution or bit depth
of the display (using NVIDIA control panel or the Display control panel on Windows),
the amount of memory needed for the primary surface changes. For example, if the
user changes the display resolution from 1280x1024x32-bit to 1600x1200x32-bit, the
system must dedicate 7.68 MB to the primary surface rather than 5.24 MB. (Full-screen
graphics applications running with anti-aliasing enabled may require much more
display memory for the primary surface.) On Windows, other events that may initiate
display mode switches include launching a full-screen DirectX application, hitting Alt
+Tab to task switch away from a full-screen DirectX application, or hitting Ctrl+Alt+Del
to lock the computer.
If a mode switch increases the amount of memory needed for the primary surface, the
system may have to cannibalize memory allocations dedicated to CUDA applications.
Therefore, a mode switch causes any subsequent call to the CUDA runtime to fail and
return an invalid context error.
3.6. Tesla Compute Cluster Mode for Windows
Using NVIDIA's System Management Interface (nvidia-smi), the Windows device driver
can be put in TCC (Tesla Compute Cluster) mode for devices of the Tesla and Quadro
Series of compute capability 2.0 and higher.
This mode has the following primary benefits:
www.nvidia.com
CUDA C Programming Guide
PG-02829-001_v9.1 | 68
Programming Interface
‣ It makes it possible to use these GPUs in cluster nodes with non-NVIDIA integrated
  graphics;
‣ It makes these GPUs available via Remote Desktop, both directly and via cluster
  management systems that rely on Remote Desktop;
‣ It makes these GPUs available to applications running as a Windows service (i.e., in
  Session 0).
However, the TCC mode removes support for any graphics functionality.
Chapter 4.
HARDWARE IMPLEMENTATION
The NVIDIA GPU architecture is built around a scalable array of multithreaded
Streaming Multiprocessors (SMs). When a CUDA program on the host CPU invokes a
kernel grid, the blocks of the grid are enumerated and distributed to multiprocessors
with available execution capacity. The threads of a thread block execute concurrently
on one multiprocessor, and multiple thread blocks can execute concurrently on one
multiprocessor. As thread blocks terminate, new blocks are launched on the vacated
multiprocessors.
A multiprocessor is designed to execute hundreds of threads concurrently. To manage
such a large number of threads, it employs a unique architecture called SIMT (Single-
Instruction, Multiple-Thread) that is described in SIMT Architecture. The instructions
are pipelined to leverage instruction-level parallelism within a single thread, as well as
thread-level parallelism extensively through simultaneous hardware multithreading
as detailed in Hardware Multithreading. Unlike CPU cores, however, instructions are
issued in order, and there is no branch prediction and no speculative execution.
SIMT Architecture and Hardware Multithreading describe the architecture features of
the streaming multiprocessor that are common to all devices. Compute Capability 3.x,
Compute Capability 5.x, Compute Capability 6.x, and Compute Capability 7.x provide
the specifics for devices of compute capabilities 3.x, 5.x, 6.x, and 7.x respectively.
The NVIDIA GPU architecture uses a little-endian representation.
4.1. SIMT Architecture
The multiprocessor creates, manages, schedules, and executes threads in groups of 32
parallel threads called warps. Individual threads composing a warp start together at
the same program address, but they have their own instruction address counter and
register state and are therefore free to branch and execute independently. The term warp
originates from weaving, the first parallel thread technology. A half-warp is either the
first or second half of a warp. A quarter-warp is either the first, second, third, or fourth
quarter of a warp.
When a multiprocessor is given one or more thread blocks to execute, it partitions
them into warps and each warp gets scheduled by a warp scheduler for execution. The
way a block is partitioned into warps is always the same; each warp contains threads
of consecutive, increasing thread IDs with the first warp containing thread 0. Thread
Hierarchy describes how thread IDs relate to thread indices in the block.
A warp executes one common instruction at a time, so full efficiency is realized when
all 32 threads of a warp agree on their execution path. If threads of a warp diverge via a
data-dependent conditional branch, the warp executes each branch path taken, disabling
threads that are not on that path. Branch divergence occurs only within a warp; different
warps execute independently regardless of whether they are executing common or
disjoint code paths.
The SIMT architecture is akin to SIMD (Single Instruction, Multiple Data) vector
organizations in that a single instruction controls multiple processing elements. A key
difference is that SIMD vector organizations expose the SIMD width to the software,
whereas SIMT instructions specify the execution and branching behavior of a single
thread. In contrast with SIMD vector machines, SIMT enables programmers to write
thread-level parallel code for independent, scalar threads, as well as data-parallel code
for coordinated threads. For the purposes of correctness, the programmer can essentially
ignore the SIMT behavior; however, substantial performance improvements can be
realized by taking care that the code seldom requires threads in a warp to diverge. In
practice, this is analogous to the role of cache lines in traditional code: Cache line size
can be safely ignored when designing for correctness but must be considered in the code
structure when designing for peak performance. Vector architectures, on the other hand,
require the software to coalesce loads into vectors and manage divergence manually.
Prior to Volta, warps used a single program counter shared amongst all 32 threads in the
warp together with an active mask specifying the active threads of the warp. As a result,
threads from the same warp in divergent regions or different states of execution cannot
signal each other or exchange data, and algorithms requiring fine-grained sharing of
data guarded by locks or mutexes can easily lead to deadlock, depending on which warp
the contending threads come from.
Starting with the Volta architecture, Independent Thread Scheduling allows full
concurrency between threads, regardless of warp. With Independent Thread Scheduling,
the GPU maintains execution state per thread, including a program counter and call
stack, and can yield execution at a per-thread granularity, either to make better use of
execution resources or to allow one thread to wait for data to be produced by another.
A schedule optimizer determines how to group active threads from the same warp
together into SIMT units. This retains the high throughput of SIMT execution as in prior
NVIDIA GPUs, but with much more flexibility: threads can now diverge and reconverge
at sub-warp granularity.
Independent Thread Scheduling can lead to a rather different set of threads participating
in the executed code than intended if the developer made assumptions about warp-
synchronicity¹ of previous hardware architectures. In particular, any warp-synchronous
code (such as synchronization-free, intra-warp reductions) should be revisited to ensure
compatibility with Volta and beyond. See Compute Capability 7.x for further details.
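As an illustration of such a revision (a sketch, not an excerpt from this guide), an
intra-warp sum reduction can be written with the __shfl_down_sync() warp shuffle
variant, which takes an explicit mask naming the threads expected to participate instead
of relying on implicit warp-synchronous execution:
// Illustrative sketch: intra-warp sum reduction using explicitly synchronizing shuffles.
__device__ float warpReduceSum(float val)
{
    const unsigned fullMask = 0xffffffffu;   // all 32 lanes are expected to participate
    for (int offset = 16; offset > 0; offset /= 2)
        val += __shfl_down_sync(fullMask, val, offset);
    return val;                              // lane 0 holds the warp-wide sum
}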
¹ The term warp-synchronous refers to code that implicitly assumes threads in the same warp are synchronized at every instruction.
Notes
The threads of a warp that are participating in the current instruction are called the
active threads, whereas threads not on the current instruction are inactive (disabled).
Threads can be inactive for a variety of reasons including having exited earlier than
other threads of their warp, having taken a different branch path than the branch path
currently executed by the warp, or being the last threads of a block whose number of
threads is not a multiple of the warp size.
If a non-atomic instruction executed by a warp writes to the same location in global or
shared memory for more than one of the threads of the warp, the number of serialized
writes that occur to that location varies depending on the compute capability of the
device (see Compute Capability 3.x, Compute Capability 5.x, Compute Capability 6.x,
and Compute Capability 7.x), and which thread performs the final write is undefined.
If an atomic instruction executed by a warp reads, modifies, and writes to the same
location in global memory for more than one of the threads of the warp, each read/
modify/write to that location occurs and they are all serialized, but the order in which
they occur is undefined.
4.2. Hardware Multithreading
The execution context (program counters, registers, etc.) for each warp processed by a
multiprocessor is maintained on-chip during the entire lifetime of the warp. Therefore,
switching from one execution context to another has no cost, and at every instruction
issue time, a warp scheduler selects a warp that has threads ready to execute its next
instruction (the active threads of the warp) and issues the instruction to those threads.
In particular, each multiprocessor has a set of 32-bit registers that are partitioned among
the warps, and a parallel data cache or shared memory that is partitioned among the thread
blocks.
The number of blocks and warps that can reside and be processed together on the
multiprocessor for a given kernel depends on the amount of registers and shared
memory used by the kernel and the amount of registers and shared memory available
on the multiprocessor. There are also a maximum number of resident blocks and a
maximum number of resident warps per multiprocessor. These limits as well the amount
of registers and shared memory available on the multiprocessor are a function of the
compute capability of the device and are given in Appendix Compute Capabilities. If
there are not enough registers or shared memory available per multiprocessor to process
at least one block, the kernel will fail to launch.
The total number of warps in a block is ceil(T / Wsize, 1), where:
‣ T is the number of threads per block,
‣ Wsize is the warp size, which is equal to 32,
‣ ceil(x, y) is equal to x rounded up to the nearest multiple of y.
For example, a block of 96 threads contains ceil(96 / 32, 1) = 3 warps, and a block of
100 threads contains 4 warps.
The total number of registers and total amount of shared memory allocated for a block
are documented in the CUDA Occupancy Calculator provided in the CUDA Toolkit.
Chapter 5.
PERFORMANCE GUIDELINES
5.1. Overall Performance Optimization Strategies
Performance optimization revolves around three basic strategies:
‣ Maximize parallel execution to achieve maximum utilization;
‣ Optimize memory usage to achieve maximum memory throughput;
‣ Optimize instruction usage to achieve maximum instruction throughput.
Which strategies will yield the best performance gain for a particular portion of an
application depends on the performance limiters for that portion; optimizing instruction
usage of a kernel that is mostly limited by memory accesses will not yield any significant
performance gain, for example. Optimization efforts should therefore be constantly
directed by measuring and monitoring the performance limiters, for example using the
CUDA profiler. Also, comparing the floating-point operation throughput or memory
throughput - whichever makes more sense - of a particular kernel to the corresponding
peak theoretical throughput of the device indicates how much room for improvement
there is for the kernel.
5.2. Maximize Utilization
To maximize utilization the application should be structured in a way that it exposes
as much parallelism as possible and efficiently maps this parallelism to the various
components of the system to keep them busy most of the time.
5.2.1. Application Level
At a high level, the application should maximize parallel execution between the host, the
devices, and the bus connecting the host to the devices, by using asynchronous function
calls and streams as described in Asynchronous Concurrent Execution. It should assign
to each processor the type of work it does best: serial workloads to the host; parallel
workloads to the devices.
For the parallel workloads, at points in the algorithm where parallelism is broken
because some threads need to synchronize in order to share data with each other,
there are two cases: Either these threads belong to the same block, in which case they
should use __syncthreads() and share data through shared memory within the same
kernel invocation, or they belong to different blocks, in which case they must share
data through global memory using two separate kernel invocations, one for writing to
and one for reading from global memory. The second case is much less optimal since it
adds the overhead of extra kernel invocations and global memory traffic. Its occurrence
should therefore be minimized by mapping the algorithm to the CUDA programming
model in such a way that the computations that require inter-thread communication are
performed within a single thread block as much as possible.
5.2.2. Device Level
At a lower level, the application should maximize parallel execution between the
multiprocessors of a device.
Multiple kernels can execute concurrently on a device, so maximum utilization can
also be achieved by using streams to enable enough kernels to execute concurrently as
described in Asynchronous Concurrent Execution.
5.2.3. Multiprocessor Level
At an even lower level, the application should maximize parallel execution between the
various functional units within a multiprocessor.
As described in Hardware Multithreading, a GPU multiprocessor relies on thread-level
parallelism to maximize utilization of its functional units. Utilization is therefore
directly linked to the number of resident warps. At every instruction issue time, a
warp scheduler selects a warp that is ready to execute its next instruction, if any, and
issues the instruction to the active threads of the warp. The number of clock cycles it
takes for a warp to be ready to execute its next instruction is called the latency, and
full utilization is achieved when all warp schedulers always have some instruction to
issue for some warp at every clock cycle during that latency period, or in other words,
when latency is completely "hidden". The number of instructions required to hide a
latency of L clock cycles depends on the respective throughputs of these instructions
(see Arithmetic Instructions for the throughputs of various arithmetic instructions).
Assuming maximum throughput for all instructions, it is: 8L for devices of compute
capability 3.x since a multiprocessor issues a pair of instructions per warp over one clock
cycle for four warps at a time, as mentioned in Compute Capability 3.x.
For devices of compute capability 3.x, the eight instructions issued every cycle are four
pairs for four different warps, each pair being for the same warp.
The most common reason a warp is not ready to execute its next instruction is that the
instruction's input operands are not available yet.
If all input operands are registers, latency is caused by register dependencies, i.e., some
of the input operands are written by some previous instruction(s) whose execution has
not completed yet. In the case of a back-to-back register dependency (i.e., some input
operand is written by the previous instruction), the latency is equal to the execution
time of the previous instruction and the warp schedulers must schedule instructions for
different warps during that time. Execution time varies depending on the instruction,
but it is typically about 11 clock cycles for devices of compute capability 3.x, which
translates to 44 warps for devices of compute capability 3.x (assuming that warps
execute instructions with maximum throughput, otherwise fewer warps are needed).
This is also assuming enough instruction-level parallelism so that schedulers are always
able to issue pairs of instructions for each warp.
If some input operand resides in off-chip memory, the latency is much higher: 200 to
400 clock cycles for devices of compute capability 3.x. The number of warps required
to keep the warp schedulers busy during such high latency periods depends on the
kernel code and its degree of instruction-level parallelism. In general, more warps are
required if the ratio of the number of instructions with no off-chip memory operands
(i.e., arithmetic instructions most of the time) to the number of instructions with off-chip
memory operands is low (this ratio is commonly called the arithmetic intensity of the
program). For example, assume this ratio is 30, also assume the latencies are 300 cycles
on devices of compute capability 3.x. Then about 40 warps are required for devices of
compute capability 3.x (with the same assumptions as in the previous paragraph).
Another reason a warp is not ready to execute its next instruction is that it is waiting
at some memory fence (Memory Fence Functions) or synchronization point (Memory
Fence Functions). A synchronization point can force the multiprocessor to idle as
more and more warps wait for other warps in the same block to complete execution of
instructions prior to the synchronization point. Having multiple resident blocks per
multiprocessor can help reduce idling in this case, as warps from different blocks do not
need to wait for each other at synchronization points.
The number of blocks and warps residing on each multiprocessor for a given kernel
call depends on the execution configuration of the call (Execution Configuration), the
memory resources of the multiprocessor, and the resource requirements of the kernel as
described in Hardware Multithreading. Register and shared memory usage are reported
by the compiler when compiling with the -ptxas-options=-v option.
The total amount of shared memory required for a block is equal to the sum of the
amount of statically allocated shared memory and the amount of dynamically allocated
shared memory.
The number of registers used by a kernel can have a significant impact on the number
of resident warps. For example, for devices of compute capability 6.x, if a kernel uses
64 registers and each block has 512 threads and requires very little shared memory,
then two blocks (i.e., 32 warps) can reside on the multiprocessor since they require
2x512x64 registers, which exactly matches the number of registers available on the
multiprocessor. But as soon as the kernel uses one more register, only one block (i.e.,
16 warps) can be resident since two blocks would require 2x512x65 registers, which are
more registers than are available on the multiprocessor. Therefore, the compiler attempts
to minimize register usage while keeping register spilling (see Device Memory Accesses)
and the number of instructions to a minimum. Register usage can be controlled using
the maxrregcount compiler option or launch bounds as described in Launch Bounds.
Each double variable and each long long variable uses two registers.
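As an illustration of launch bounds (the values below are example numbers, not
recommendations from this guide), a kernel can advertise the maximum block size it will
be launched with, and optionally a desired minimum number of resident blocks, so that
the compiler can budget registers accordingly:
// Illustrative sketch: the kernel promises at most 256 threads per block and asks
// for at least 4 resident blocks per multiprocessor, constraining register usage.
__global__ void __launch_bounds__(256, 4)
scaleKernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0f * in[i];
}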
The effect of execution configuration on performance for a given kernel call generally
depends on the kernel code. Experimentation is therefore recommended. Applications
can also parameterize execution configurations based on register file size and shared
memory size, which depends on the compute capability of the device, as well as on the
number of multiprocessors and memory bandwidth of the device, all of which can be
queried using the runtime (see reference manual).
The number of threads per block should be chosen as a multiple of the warp size to
avoid wasting computing resources with under-populated warps as much as possible.
5.2.3.1. Occupancy Calculator
Several API functions exist to assist programmers in choosing thread block size based on
register and shared memory requirements.
‣ The occupancy calculator API,
  cudaOccupancyMaxActiveBlocksPerMultiprocessor, can provide an
  occupancy prediction based on the block size and shared memory usage of a kernel.
  This function reports occupancy in terms of the number of concurrent thread blocks
  per multiprocessor.
  Note that this value can be converted to other metrics. Multiplying by
  the number of warps per block yields the number of concurrent warps
  per multiprocessor; further dividing concurrent warps by max warps per
  multiprocessor gives the occupancy as a percentage.
‣ The occupancy-based launch configurator APIs,
  cudaOccupancyMaxPotentialBlockSize and
  cudaOccupancyMaxPotentialBlockSizeVariableSMem, heuristically calculate
  an execution configuration that achieves the maximum multiprocessor-level
  occupancy.
The following code sample calculates the occupancy of MyKernel. It then reports the
occupancy level with the ratio between concurrent warps versus maximum warps per
multiprocessor.
// Device code
__global__ void MyKernel(int *d, int *a, int *b)
{
int idx = threadIdx.x + blockIdx.x * blockDim.x;
d[idx] = a[idx] * b[idx];
}
// Host code
int main()
{
int numBlocks;
int blockSize = 32;
// Occupancy in terms of active blocks
// These variables are used to convert occupancy to warps
int device;
cudaDeviceProp prop;
int activeWarps;
int maxWarps;
cudaGetDevice(&device);
cudaGetDeviceProperties(&prop, device);
cudaOccupancyMaxActiveBlocksPerMultiprocessor(
&numBlocks,
MyKernel,
blockSize,
0);
activeWarps = numBlocks * blockSize / prop.warpSize;
maxWarps = prop.maxThreadsPerMultiProcessor / prop.warpSize;
std::cout << "Occupancy: " << (double)activeWarps / maxWarps * 100 << "%" << std::endl;
return 0;
}
The following code sample configures an occupancy-based kernel launch of MyKernel
according to the user input.
// Device code
__global__ void MyKernel(int *array, int arrayCount)
{
int idx = threadIdx.x + blockIdx.x * blockDim.x;
if (idx < arrayCount) {
array[idx] *= array[idx];
}
}
// Host code
int launchMyKernel(int *array, int arrayCount)
{
int blockSize;
// The launch configurator returned block size
int minGridSize;
// The minimum grid size needed to achieve the
// maximum occupancy for a full device
// launch
int gridSize;
// The actual grid size needed, based on input
// size
cudaOccupancyMaxPotentialBlockSize(
&minGridSize,
&blockSize,
(void*)MyKernel,
0,
arrayCount);
// Round up according to array size
gridSize = (arrayCount + blockSize - 1) / blockSize;
MyKernel<<<gridSize, blockSize>>>(array, arrayCount);
cudaDeviceSynchronize();
// If interested, the occupancy can be calculated with
// cudaOccupancyMaxActiveBlocksPerMultiprocessor
return 0;
}
The CUDA Toolkit also provides a self-documenting, standalone occupancy calculator
and launch configurator implementation in /include/cuda_occupancy.h for
any use cases that cannot depend on the CUDA software stack.
A spreadsheet version of the occupancy calculator is also provided. The spreadsheet
version is particularly useful as a learning tool that visualizes the impact of changes
to the parameters that affect occupancy (block size, registers per thread, and shared
memory per thread).
5.3. Maximize Memory Throughput
The first step in maximizing overall memory throughput for the application is to
minimize data transfers with low bandwidth.
That means minimizing data transfers between the host and the device, as detailed in
Data Transfer between Host and Device, since these have much lower bandwidth than
data transfers between global memory and the device.
That also means minimizing data transfers between global memory and the device
by maximizing use of on-chip memory: shared memory and caches (i.e., L1 cache and
L2 cache available on devices of compute capability 2.x and higher, texture cache and
constant cache available on all devices).
Shared memory is equivalent to a user-managed cache: The application explicitly
allocates and accesses it. As illustrated in CUDA C Runtime, a typical programming
pattern is to stage data coming from device memory into shared memory; in other
words, to have each thread of a block:
‣ Load data from device memory to shared memory,
‣ Synchronize with all the other threads of the block so that each thread can safely
  read shared memory locations that were populated by different threads,
‣ Process the data in shared memory,
‣ Synchronize again if necessary to make sure that shared memory has been updated
  with the results,
‣ Write the results back to device memory.
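A minimal sketch of this staging pattern follows (the kernel, its names, and the
assumption of a one-dimensional block of 256 threads are illustrative, not taken from the
guide's samples):
// Illustrative sketch of the staging pattern listed above.
__global__ void sumWithNeighbor(const float* in, float* out, int n)
{
    __shared__ float tile[256];              // assumes blockDim.x == 256
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    // Load data from device memory to shared memory.
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;

    // Synchronize so that each thread can safely read what other threads loaded.
    __syncthreads();

    // Process the data in shared memory: read a value loaded by a neighboring thread.
    int left = (threadIdx.x == 0) ? 0 : threadIdx.x - 1;
    float result = tile[threadIdx.x] + tile[left];

    // Write the results back to device memory.
    if (i < n)
        out[i] = result;
}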
For some applications (e.g., for which global memory access patterns are data-
dependent), a traditional hardware-managed cache is more appropriate to exploit data
locality. As mentioned in Compute Capability 3.x and Compute Capability 7.x, for
devices of compute capability 3.x and 7.x, the same on-chip memory is used for both L1
and shared memory, and how much of it is dedicated to L1 versus shared memory is
configurable for each kernel call.
The throughput of memory accesses by a kernel can vary by an order of magnitude
depending on access pattern for each type of memory. The next step in maximizing
memory throughput is therefore to organize memory accesses as optimally as possible
based on the optimal memory access patterns described in Device Memory Accesses.
This optimization is especially important for global memory accesses as global memory
bandwidth is low, so non-optimal global memory accesses have a higher impact on
performance.
5.3.1. Data Transfer between Host and Device
Applications should strive to minimize data transfer between the host and the device.
One way to accomplish this is to move more code from the host to the device, even
if that means running kernels with low parallelism computations. Intermediate data
structures may be created in device memory, operated on by the device, and destroyed
without ever being mapped by the host or copied to host memory.
Also, because of the overhead associated with each transfer, batching many small
transfers into a single large transfer always performs better than making each transfer
separately.
On systems with a front-side bus, higher performance for data transfers between host
and device is achieved by using page-locked host memory as described in Page-Locked
Host Memory.
In addition, when using mapped page-locked memory (Mapped Memory), there is
no need to allocate any device memory and explicitly copy data between device and
host memory. Data transfers are implicitly performed each time the kernel accesses the
mapped memory. For maximum performance, these memory accesses must be coalesced
as with accesses to global memory (see Device Memory Accesses). Assuming that they
are and that the mapped memory is read or written only once, using mapped page-
locked memory instead of explicit copies between device and host memory can be a win
for performance.
On integrated systems where device memory and host memory are physically the same,
any copy between host and device memory is superfluous and mapped page-locked
memory should be used instead. Applications may query whether a device is integrated
by checking that the integrated device property (see Device Enumeration) is equal to 1.
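For illustration, a sketch of this approach is shown below (the kernel, sizes, and launch
configuration are example choices, and error checking is omitted):
// Illustrative sketch: mapped page-locked memory accessed directly by a kernel.
#include <cuda_runtime.h>

__global__ void increment(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main()
{
    const int n = 1 << 20;
    cudaSetDeviceFlags(cudaDeviceMapHost);            // enable mapping before any allocation

    float* h_data;
    cudaHostAlloc((void**)&h_data, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) h_data[i] = 0.0f;

    float* d_data;
    cudaHostGetDevicePointer((void**)&d_data, h_data, 0); // device alias of the same memory

    increment<<<(n + 255) / 256, 256>>>(d_data, n);   // transfers happen implicitly
    cudaDeviceSynchronize();

    cudaFreeHost(h_data);
    return 0;
}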
5.3.2. Device Memory Accesses
An instruction that accesses addressable memory (i.e., global, local, shared, constant,
or texture memory) might need to be re-issued multiple times depending on the
distribution of the memory addresses across the threads within the warp. How the
distribution affects the instruction throughput this way is specific to each type of
memory and described in the following sections. For example, for global memory, as a
general rule, the more scattered the addresses are, the more reduced the throughput is.
Global Memory
Global memory resides in device memory and device memory is accessed via 32-, 64-,
or 128-byte memory transactions. These memory transactions must be naturally aligned:
Only the 32-, 64-, or 128-byte segments of device memory that are aligned to their size
(i.e., whose first address is a multiple of their size) can be read or written by memory
transactions.
When a warp executes an instruction that accesses global memory, it coalesces the
memory accesses of the threads within the warp into one or more of these memory
transactions depending on the size of the word accessed by each thread and the
distribution of the memory addresses across the threads. In general, the more
transactions are necessary, the more unused words are transferred in addition to the
words accessed by the threads, reducing the instruction throughput accordingly. For
example, if a 32-byte memory transaction is generated for each thread's 4-byte access,
throughput is divided by 8.
How many transactions are necessary and how much throughput is ultimately affected
varies with the compute capability of the device. Compute Capability 3.x, Compute
Capability 5.x, Compute Capability 6.x and Compute Capability 7.x give more details on
how global memory accesses are handled for various compute capabilities.
To maximize global memory throughput, it is therefore important to maximize
coalescing by:
‣ Following the most optimal access patterns based on Compute Capability 3.x,
  Compute Capability 5.x, Compute Capability 6.x and Compute Capability 7.x,
‣ Using data types that meet the size and alignment requirement detailed in Device
  Memory Accesses,
‣ Padding data in some cases, for example, when accessing a two-dimensional array
  as described in Device Memory Accesses.
Size and Alignment Requirement
Global memory instructions support reading or writing words of size equal to 1, 2, 4, 8,
or 16 bytes. Any access (via a variable or a pointer) to data residing in global memory
compiles to a single global memory instruction if and only if the size of the data type
is 1, 2, 4, 8, or 16 bytes and the data is naturally aligned (i.e., its address is a multiple of
that size).
If this size and alignment requirement is not fulfilled, the access compiles to multiple
instructions with interleaved access patterns that prevent these instructions from fully
coalescing. It is therefore recommended to use types that meet this requirement for data
that resides in global memory.
The alignment requirement is automatically fulfilled for built-in types such as char,
short, int, long, long long, float, and double, and for built-in vector types such as
float2 or float4.
For structures, the size and alignment requirements can be enforced by the compiler
using the alignment specifiers __align__(8) or __align__(16), such as
struct __align__(8) {
float x;
float y;
};
or
struct __align__(16) {
float x;
float y;
float z;
};
Any address of a variable residing in global memory or returned by one of the memory
allocation routines from the driver or runtime API is always aligned to at least 256 bytes.
Reading non-naturally aligned 8-byte or 16-byte words produces incorrect results (off by
a few words), so special care must be taken to maintain alignment of the starting address
of any value or array of values of these types. A typical case where this might be easily
overlooked is when using some custom global memory allocation scheme, whereby the
allocations of multiple arrays (with multiple calls to cudaMalloc() or cuMemAlloc())
is replaced by the allocation of a single large block of memory partitioned into multiple
arrays, in which case the starting address of each array is offset from the block's starting
address.
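The sketch below illustrates one way to keep such offsets aligned when carving several
arrays out of a single allocation; alignUp() is a hypothetical helper, not a CUDA API,
and the sizes are example values.
// Illustrative sketch: sub-allocating two arrays from one block while keeping alignment.
#include <cuda_runtime.h>

// Hypothetical helper: round an offset up to the next multiple of alignment (a power of two).
static size_t alignUp(size_t offset, size_t alignment)
{
    return (offset + alignment - 1) & ~(alignment - 1);
}

int main()
{
    const size_t nFloats = 1000, nDoubles = 1000;

    size_t floatOffset  = 0;
    size_t doubleOffset = alignUp(floatOffset + nFloats * sizeof(float), 16); // keep 16-byte alignment
    size_t totalBytes   = doubleOffset + nDoubles * sizeof(double);

    char* base;
    cudaMalloc((void**)&base, totalBytes);   // base itself is aligned to at least 256 bytes

    float*  d_floats  = (float*)(base + floatOffset);
    double* d_doubles = (double*)(base + doubleOffset);
    // ... use d_floats and d_doubles ...
    (void)d_floats; (void)d_doubles;

    cudaFree(base);
    return 0;
}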
Two-Dimensional Arrays
A common global memory access pattern is when each thread of index (tx,ty) uses the
following address to access one element of a 2D array of width width, located at address
BaseAddress of type type* (where type meets the requirement described in Maximize
Utilization):
BaseAddress + width * ty + tx
For these accesses to be fully coalesced, both the width of the thread block and the width
of the array must be a multiple of the warp size.
In particular, this means that an array whose width is not a multiple of this size will be
accessed much more efficiently if it is actually allocated with a width rounded up to the
closest multiple of this size and its rows padded accordingly. The cudaMallocPitch()
and cuMemAllocPitch() functions and associated memory copy functions described in
the reference manual enable programmers to write non-hardware-dependent code to
allocate arrays that conform to these constraints.
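For illustration, a minimal sketch of a pitched allocation and row-wise indexing is shown
below; the sizes, kernel, and launch configuration are example choices.
// Illustrative sketch: padded 2D allocation with cudaMallocPitch() and pitched indexing.
#include <cuda_runtime.h>

__global__ void scale2D(float* devPtr, size_t pitch, int width, int height)
{
    int tx = blockIdx.x * blockDim.x + threadIdx.x;
    int ty = blockIdx.y * blockDim.y + threadIdx.y;
    if (tx < width && ty < height) {
        // pitch is in bytes, so the row base address is computed on a char* first.
        float* row = (float*)((char*)devPtr + ty * pitch);
        row[tx] *= 2.0f;
    }
}

int main()
{
    const int width = 1000, height = 1000;
    float* devPtr;
    size_t pitch;
    cudaMallocPitch((void**)&devPtr, &pitch, width * sizeof(float), height);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    scale2D<<<grid, block>>>(devPtr, pitch, width, height);
    cudaDeviceSynchronize();

    cudaFree(devPtr);
    return 0;
}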
Local Memory
Local memory accesses only occur for some automatic variables as mentioned in
Variable Memory Space Specifiers. Automatic variables that the compiler is likely to
place in local memory are:
‣ Arrays for which it cannot determine that they are indexed with constant quantities,
‣ Large structures or arrays that would consume too much register space,
‣ Any variable if the kernel uses more registers than available (this is also known as
  register spilling).
Inspection of the PTX assembly code (obtained by compiling with the -ptx or -keep
option) will tell if a variable has been placed in local memory during the first
compilation phases as it will be declared using the .local mnemonic and accessed
using the ld.local and st.local mnemonics. Even if it has not, subsequent
compilation phases might still decide otherwise though if they find it consumes too
much register space for the targeted architecture: Inspection of the cubin object using
cuobjdump will tell if this is the case. Also, the compiler reports total local memory
usage per kernel (lmem) when compiling with the --ptxas-options=-v option. Note
that some mathematical functions have implementation paths that might access local
memory.
The local memory space resides in device memory, so local memory accesses have the
same high latency and low bandwidth as global memory accesses and are subject to the
same requirements for memory coalescing as described in Device Memory Accesses.
Local memory is however organized such that consecutive 32-bit words are accessed
by consecutive thread IDs. Accesses are therefore fully coalesced as long as all threads
in a warp access the same relative address (e.g., same index in an array variable, same
member in a structure variable).
On some devices of compute capability 3.x local memory accesses are always cached in
L1 and L2 in the same way as global memory accesses (see Compute Capability 3.x).
On devices of compute capability 5.x and 6.x, local memory accesses are always cached
in L2 in the same way as global memory accesses (see Compute Capability 5.x and
Compute Capability 6.x).
Shared Memory
Because it is on-chip, shared memory has much higher bandwidth and much lower
latency than local or global memory.
To achieve high bandwidth, shared memory is divided into equally-sized memory
modules, called banks, which can be accessed simultaneously. Any memory read or
write request made of n addresses that fall in n distinct memory banks can therefore be
serviced simultaneously, yielding an overall bandwidth that is n times as high as the
bandwidth of a single module.
However, if two addresses of a memory request fall in the same memory bank, there is a
bank conflict and the access has to be serialized. The hardware splits a memory request
with bank conflicts into as many separate conflict-free requests as necessary, decreasing
throughput by a factor equal to the number of separate memory requests. If the number
of separate memory requests is n, the initial memory request is said to cause n-way bank
conflicts.
To get maximum performance, it is therefore important to understand how memory
addresses map to memory banks in order to schedule the memory requests so as
to minimize bank conflicts. This is described in Compute Capability 3.x, Compute
Capability 5.x, Compute Capability 6.x, and Compute Capability 7.x for devices of
compute capability 3.x, 5.x, 6.x and 7.x, respectively.
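As one common illustration (a sketch, not an excerpt from this guide), padding a shared
memory tile by one element per row keeps the column accesses of a transpose in distinct
banks; the kernel below assumes a square matrix whose side is a multiple of the 32x32
tile and a 32x32 thread block.
// Illustrative sketch: shared memory padding to avoid bank conflicts in a transpose.
#define TILE_DIM 32

__global__ void transposeTile(float* out, const float* in, int width)
{
    // The extra column of padding shifts each row by one bank, so the column
    // read below does not cause a 32-way bank conflict.
    __shared__ float tile[TILE_DIM][TILE_DIM + 1];

    int x = blockIdx.x * TILE_DIM + threadIdx.x;
    int y = blockIdx.y * TILE_DIM + threadIdx.y;
    tile[threadIdx.y][threadIdx.x] = in[y * width + x];

    __syncthreads();

    int tx = blockIdx.y * TILE_DIM + threadIdx.x;
    int ty = blockIdx.x * TILE_DIM + threadIdx.y;
    out[ty * width + tx] = tile[threadIdx.x][threadIdx.y];
}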
Constant Memory
The constant memory space resides in device memory and is cached in the constant
cache.
A request is then split into as many separate requests as there are different memory
addresses in the initial request, decreasing throughput by a factor equal to the number
of separate requests.
The resulting requests are then serviced at the throughput of the constant cache in case
of a cache hit, or at the throughput of device memory otherwise.
Texture and Surface Memory
The texture and surface memory spaces reside in device memory and are cached in
texture cache, so a texture fetch or surface read costs one memory read from device
memory only on a cache miss, otherwise it just costs one read from texture cache. The
texture cache is optimized for 2D spatial locality, so threads of the same warp that read
texture or surface addresses that are close together in 2D will achieve best performance.
Also, it is designed for streaming fetches with a constant latency; a cache hit reduces
DRAM bandwidth demand but not fetch latency.
Reading device memory through texture or surface fetching presents some benefits
that can make it an advantageous alternative to reading device memory from global or
constant memory:
‣ If the memory reads do not follow the access patterns that global or constant
  memory reads must follow to get good performance, higher bandwidth can be
  achieved provided that there is locality in the texture fetches or surface reads;
‣ Addressing calculations are performed outside the kernel by dedicated units;
‣ Packed data may be broadcast to separate variables in a single operation;
‣ 8-bit and 16-bit integer input data may be optionally converted to 32-bit floating-
  point values in the range [0.0, 1.0] or [-1.0, 1.0] (see Texture Memory).
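As a hedged example of the texture path, the sketch below creates a texture object over
linear device memory and reads it with tex1Dfetch(); the sizes, kernel, and launch
configuration are illustrative.
// Illustrative sketch: reading linear device memory through a texture object.
#include <cuda_runtime.h>

__global__ void copyViaTexture(cudaTextureObject_t tex, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch<float>(tex, i);
}

int main()
{
    const int n = 1 << 20;
    float *d_in, *d_out;
    cudaMalloc((void**)&d_in,  n * sizeof(float));
    cudaMalloc((void**)&d_out, n * sizeof(float));

    cudaResourceDesc resDesc = {};
    resDesc.resType                = cudaResourceTypeLinear;
    resDesc.res.linear.devPtr      = d_in;
    resDesc.res.linear.desc        = cudaCreateChannelDesc<float>();
    resDesc.res.linear.sizeInBytes = n * sizeof(float);

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, NULL);

    copyViaTexture<<<(n + 255) / 256, 256>>>(tex, d_out, n);
    cudaDeviceSynchronize();

    cudaDestroyTextureObject(tex);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}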
5.4. Maximize Instruction Throughput
To maximize instruction throughput the application should:
‣ Minimize the use of arithmetic instructions with low throughput; this includes
  trading precision for speed when it does not affect the end result, such as using
  intrinsic instead of regular functions (intrinsic functions are listed in Intrinsic
  Functions), single-precision instead of double-precision, or flushing denormalized
  numbers to zero;
‣ Minimize divergent warps caused by control flow instructions as detailed in Control
  Flow Instructions;
‣ Reduce the number of instructions, for example, by optimizing out synchronization
  points whenever possible as described in Synchronization Instruction or by using
  restricted pointers as described in __restrict__.
In this section, throughputs are given in number of operations per clock cycle per
multiprocessor. For a warp size of 32, one instruction corresponds to 32 operations,
so if N is the number of operations per clock cycle, the instruction throughput is N/32
instructions per clock cycle.
All throughputs are for one multiprocessor. They must be multiplied by the number of
multiprocessors in the device to get throughput for the whole device.
5.4.1. Arithmetic Instructions
Table 2 gives the throughputs of the arithmetic instructions that are natively supported
in hardware for devices of various compute capabilities.
Table 2 Throughput of Native Arithmetic Instructions
(Number of Results per Clock Cycle per Multiprocessor)

                                            Compute Capability
                                         3.0,  3.5,  5.0,
                                         3.2   3.7   5.2   5.3   6.0   6.1   6.2   7.0
16-bit floating-point add, multiply,
  multiply-add                           N/A   N/A   N/A   256   128   2     256   128
32-bit floating-point add, multiply,
  multiply-add                           192   192   128   128   64    128   128   64
64-bit floating-point add, multiply,
  multiply-add                           8     64(2) 4     4     32    4     4     32
32-bit floating-point reciprocal,
  reciprocal square root, base-2
  logarithm (__log2f), base-2
  exponential (__exp2f), sine (__sinf),
  cosine (__cosf)                        32    32    32    32    16    32    32    16
32-bit integer add, extended-precision
  add, subtract, extended-precision
  subtract                               160   160   128   128   64    128   128   64
32-bit integer multiply, multiply-add,
  extended-precision multiply-add        32    32    M.I.  M.I.  M.I.  M.I.  M.I.  64(3)
24-bit integer multiply (__[u]mul24)     M.I.  M.I.  M.I.  M.I.  M.I.  M.I.  M.I.  M.I.
32-bit integer shift                     32    64(4) 64    64    32    64    64    64
compare, minimum, maximum                160   160   64    64    32    64    64    64
32-bit integer bit reverse, bit field
  extract/insert                         32    32    64    64    32    64    64    M.I.
32-bit bitwise AND, OR, XOR              160   160   128   128   64    128   128   64
count of leading zeros, most
  significant non-sign bit               32    32    32    32    16    32    32    16
population count                         32    32    32    32    16    32    32    16
warp shuffle                             32    32    32    32    32    32    32    32
sum of absolute difference               32    32    64    64    32    64    64    64
SIMD video instructions vabsdiff2        160   160   M.I.  M.I.  M.I.  M.I.  M.I.  M.I.
SIMD video instructions vabsdiff4        160   160   M.I.  M.I.  M.I.  M.I.  M.I.  64
All other SIMD video instructions        32    32    M.I.  M.I.  M.I.  M.I.  M.I.  M.I.
Type conversions from 8-bit and 16-bit
  integer to 32-bit types                128   128   32    32    16    32    32    16
Type conversions from and to 64-bit
  types                                  8     32(5) 4     4     16    4     4     16
All other type conversions               32    32    32    32    16    32    32    16

M.I. = multiple instructions
(2) 8 for GeForce GPUs
(3) 32 for extended-precision
(4) 32 for GeForce GPUs
(5) 8 for GeForce GPUs
Other instructions and functions are implemented on top of the native instructions.
The implementation may be different for devices of different compute capabilities, and
the number of native instructions after compilation may fluctuate with every compiler
version. For complicated functions, there can be multiple code paths depending on
input. cuobjdump can be used to inspect a particular implementation in a cubin object.
The implementation of some functions is readily available in the CUDA header files
(math_functions.h, device_functions.h, ...).
In general, code compiled with -ftz=true (denormalized numbers are flushed to zero)
tends to have higher performance than code compiled with -ftz=false. Similarly,
code compiled with -prec-div=false (less precise division) tends to have higher
performance than code compiled with -prec-div=true, and code compiled
with -prec-sqrt=false (less precise square root) tends to have higher performance
than code compiled with -prec-sqrt=true. The nvcc user manual describes these
compilation flags in more detail.
Single-Precision Floating-Point Division
__fdividef(x, y) (see Intrinsic Functions) provides faster single-precision floating-
point division than the division operator.
Single-Precision Floating-Point Reciprocal Square Root
To preserve IEEE-754 semantics the compiler can optimize 1.0/sqrtf() into rsqrtf()
only when both reciprocal and square root are approximate (i.e., with -prec-div=false
and -prec-sqrt=false). It is therefore recommended to invoke rsqrtf()
directly where desired.
Single-Precision Floating-Point Square Root
Single-precision floating-point square root is implemented as a reciprocal square root
followed by a reciprocal instead of a reciprocal square root followed by a multiplication
so that it gives correct results for 0 and infinity.
Sine and Cosine
sinf(x), cosf(x), tanf(x), sincosf(x), and corresponding double-precision
instructions are much more expensive and even more so if the argument x is large in
magnitude.
More precisely, the argument reduction code (see Mathematical Functions for
implementation) comprises two code paths referred to as the fast path and the slow
path, respectively.
The fast path is used for arguments sufficiently small in magnitude and essentially
consists of a few multiply-add operations. The slow path is used for arguments large in
magnitude and consists of lengthy computations required to achieve correct results over
the entire argument range.
At present, the argument reduction code for the trigonometric functions selects the fast
path for arguments whose magnitude is less than 105615.0f for the single-precision
functions, and less than 2147483648.0 for the double-precision functions.
As the slow path requires more registers than the fast path, an attempt has been made
to reduce register pressure in the slow path by storing some intermediate variables in
local memory, which may affect performance because of local memory high latency and
bandwidth (see Device Memory Accesses). At present, 28 bytes of local memory are
used by single-precision functions, and 44 bytes are used by double-precision functions.
However, the exact amount is subject to change.
Due to the lengthy computations and use of local memory in the slow path, the
throughput of these trigonometric functions is lower by one order of magnitude when
the slow path reduction is required as opposed to the fast path reduction.
Integer Arithmetic
Integer division and modulo operations are costly as they compile to up to 20
instructions. They can be replaced with bitwise operations in some cases: If n is a power
of 2, (i/n) is equivalent to (i>>log2(n)) and (i%n) is equivalent to (i&(n-1)); the
compiler will perform these conversions if n is literal.
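As an illustrative sketch (the function and the stride value are hypothetical), the bitwise replacements look as follows when n is a power of two known at compile time:

__device__ void splitIndex(int i, int* quotient, int* remainder)
{
    const int n = 32;               // power of two known at compile time
    *quotient  = i >> 5;            // same result as i / n because n == 1 << 5
    *remainder = i & (n - 1);       // same result as i % n for non-negative i
}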
__brev and __popc map to a single instruction and __brevll and __popcll to a few
instructions.
__[u]mul24 are legacy intrinsic functions that no longer have any reason to be used.
Half Precision Arithmetic
In order to achieve good half precision floating-point add, multiply or multiply-add
throughput, it is recommended that the half2 datatype is used. Vector intrinsics
(e.g., __hadd2, __hsub2, __hmul2, __hfma2) can then be used to do two operations
in a single instruction. Using half2 in place of two calls using half may also help
performance of other intrinsics, such as warp shuffles.
The intrinsic __halves2half2 is provided to convert two half precision values to the
half2 datatype.
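A minimal sketch, assuming a device of compute capability 5.3 or higher and a hypothetical element count n:

#include <cuda_fp16.h>

__global__ void scale_half2(const half2* in, half2* out, half s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    half2 s2 = __halves2half2(s, s);   // replicate the scalar into both halves
    if (i < n)
        out[i] = __hmul2(in[i], s2);   // two half multiplies in a single instruction
}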
Type Conversion
Sometimes, the compiler must insert conversion instructions, introducing additional
execution cycles. This is the case for:
‣ Functions operating on variables of type char or short whose operands generally
  need to be converted to int,
‣ Double-precision floating-point constants (i.e., those constants defined without
  any type suffix) used as input to single-precision floating-point computations (as
  mandated by C/C++ standards).
This last case can be avoided by using single-precision floating-point constants, defined
with an f suffix such as 3.141592653589793f, 1.0f, 0.5f.
5.4.2. Control Flow Instructions
Any flow control instruction (if, switch, do, for, while) can significantly impact the
effective instruction throughput by causing threads of the same warp to diverge (i.e., to
follow different execution paths). If this happens, the different execution paths have to
be serialized, increasing the total number of instructions executed for this warp.
To obtain best performance in cases where the control flow depends on the thread
ID, the controlling condition should be written so as to minimize the number of
divergent warps. This is possible because the distribution of the warps across the block
is deterministic as mentioned in SIMT Architecture. A trivial example is when the
controlling condition only depends on (threadIdx / warpSize) where warpSize is
the warp size. In this case, no warp diverges since the controlling condition is perfectly
aligned with the warps.
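As an illustrative sketch (the kernel is hypothetical), a condition based on (threadIdx.x / warpSize) keeps each warp on a single path, whereas a condition such as (threadIdx.x % 2) would make every warp diverge:

__global__ void warpAlignedBranch(float* data)
{
    // All 32 threads of a warp share the same value of (threadIdx.x / warpSize),
    // so the whole warp takes the same branch and no divergence occurs.
    if ((threadIdx.x / warpSize) % 2 == 0)
        data[threadIdx.x] *= 2.0f;
    else
        data[threadIdx.x] += 1.0f;
}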
Sometimes, the compiler may unroll loops or it may optimize out short if or switch
blocks by using branch predication instead, as detailed below. In these cases, no warp
can ever diverge. The programmer can also control loop unrolling using the #pragma
unroll directive (see #pragma unroll).
When using branch predication none of the instructions whose execution depends on
the controlling condition gets skipped. Instead, each of them is associated with a per-thread condition code or predicate that is set to true or false based on the controlling
condition and although each of these instructions gets scheduled for execution, only
the instructions with a true predicate are actually executed. Instructions with a false
predicate do not write results, and also do not evaluate addresses or read operands.
5.4.3. Synchronization Instruction
Throughput for __syncthreads() is 128 operations per clock cycle for devices of
compute capability 3.x, 32 operations per clock cycle for devices of compute capability
6.0 and 7.0 and 64 operations per clock cycle for devices of compute capability 5.x, 6.1
and 6.2.
Note that __syncthreads() can impact performance by forcing the multiprocessor to
idle as detailed in Device Memory Accesses.
Appendix A.
CUDA-ENABLED GPUS
http://developer.nvidia.com/cuda-gpus lists all CUDA-enabled devices with their
compute capability.
The compute capability, number of multiprocessors, clock frequency, total amount of
device memory, and other properties can be queried using the runtime (see reference
manual).
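A minimal host-side sketch of such a query (error checking omitted for brevity):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // Compute capability, multiprocessor count, and total global memory.
        printf("Device %d: compute %d.%d, %d multiprocessors, %zu bytes of device memory\n",
               dev, prop.major, prop.minor,
               prop.multiProcessorCount, prop.totalGlobalMem);
    }
    return 0;
}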
Appendix B.
C LANGUAGE EXTENSIONS
B.1. Function Execution Space Specifiers
Function execution space specifiers denote whether a function executes on the host or on
the device and whether it is callable from the host or from the device.
B.1.1. __device__
The __device__ execution space specifier declares a function that is:
‣ Executed on the device,
‣ Callable from the device only.
The __global__ and __device__ execution space specifiers cannot be used together.
B.1.2. __global__
The __global__ execution space specifier declares a function as being a kernel. Such a
function is:
‣ Executed on the device,
‣ Callable from the host,
‣ Callable from the device for devices of compute capability 3.2 or higher (see CUDA
  Dynamic Parallelism for more details).
A __global__ function must have void return type, and cannot be a member of a class.
Any call to a __global__ function must specify its execution configuration as described
in Execution Configuration.
A call to a __global__ function is asynchronous, meaning it returns before the device
has completed its execution.
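A minimal sketch of a kernel definition and its asynchronous launch (the names and sizes are illustrative; cudaDeviceSynchronize() is used here only to wait for completion):

// Kernel definition: must return void.
__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// Host code: the launch returns immediately.
void launchScale(float* d_data, int n)
{
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();   // block the host until the kernel has finished
}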
B.1.3. __host__
The __host__ execution space specifier declares a function that is:
‣ Executed on the host,
‣ Callable from the host only.
It is equivalent to declare a function with only the __host__ execution space specifier or
to declare it without any of the __host__, __device__, or __global__ execution space
specifiers; in either case the function is compiled for the host only.
The __global__ and __host__ execution space specifiers cannot be used together.
The __device__ and __host__ execution space specifiers can be used together
however, in which case the function is compiled for both the host and the device.
The __CUDA_ARCH__ macro introduced in Application Compatibility can be used to
differentiate code paths between host and device:
__host__ __device__ void func()
{
#if __CUDA_ARCH__ >= 600
   // Device code path for compute capability 6.x
#elif __CUDA_ARCH__ >= 500
   // Device code path for compute capability 5.x
#elif __CUDA_ARCH__ >= 300
   // Device code path for compute capability 3.x
#elif __CUDA_ARCH__ >= 200
   // Device code path for compute capability 2.x
#elif !defined(__CUDA_ARCH__)
   // Host code path
#endif
}
B.1.4. __noinline__ and __forceinline__
The compiler inlines any __device__ function when deemed appropriate.
The __noinline__ function qualifier can be used as a hint for the compiler not to inline
the function if possible.
The __forceinline__ function qualifier can be used to force the compiler to inline the
function.
The __noinline__ and __forceinline__ function qualifiers cannot be used together,
and neither function qualifier can be applied to an inline function.
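A short illustrative sketch of both qualifiers:

// Hint to the compiler not to inline a large, rarely used helper.
__noinline__ __device__ float rarelyUsedHelper(float x)
{
    return x * x + 1.0f;
}

// Force inlining of a small function on a hot path.
__forceinline__ __device__ float square(float x)
{
    return x * x;
}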
B.2. Variable Memory Space Specifiers
Variable memory space specifiers denote the memory location on the device of a
variable.
An automatic variable declared in device code without any of the __device__,
__shared__ and __constant__ memory space specifiers described in this section
generally resides in a register. However in some cases the compiler might choose to
place it in local memory, which can have adverse performance consequences as detailed
in Device Memory Accesses.
B.2.1. __device__
The __device__ memory space specifier declares a variable that resides on the device.
At most one of the other memory space specifiers defined in the next two sections may
be used together with __device__ to further denote which memory space the variable
belongs to. If none of them is present, the variable:
‣ Resides in global memory space,
‣ Has the lifetime of the CUDA context in which it is created,
‣ Has a distinct object per device,
‣ Is accessible from all the threads within the grid and from the host through
  the runtime library (cudaGetSymbolAddress() / cudaGetSymbolSize() /
  cudaMemcpyToSymbol() / cudaMemcpyFromSymbol()).
B.2.2. __constant__
The __constant__ memory space specifier, optionally used together with __device__,
declares a variable that:
‣ Resides in constant memory space,
‣ Has the lifetime of the CUDA context in which it is created,
‣ Has a distinct object per device,
‣ Is accessible from all the threads within the grid and from the host through
  the runtime library (cudaGetSymbolAddress() / cudaGetSymbolSize() /
  cudaMemcpyToSymbol() / cudaMemcpyFromSymbol()).
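A brief sketch, with hypothetical symbol names, showing both specifiers together with the symbol copy routines listed above:

__device__   float scaleFactor;        // one object per device, in global memory
__constant__ float coefficients[16];   // one object per device, in constant memory

__global__ void apply(float* data)
{
    data[threadIdx.x] = data[threadIdx.x] * scaleFactor
                        + coefficients[threadIdx.x % 16];
}

// Host code: initialize both symbols before launching the kernel.
void setup(const float* hostCoeffs)
{
    float s = 2.0f;
    cudaMemcpyToSymbol(scaleFactor, &s, sizeof(s));
    cudaMemcpyToSymbol(coefficients, hostCoeffs, 16 * sizeof(float));
}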
B.2.3. __shared__
The __shared__ memory space specifier, optionally used together with __device__,
declares a variable that:
‣ Resides in the shared memory space of a thread block,
‣ Has the lifetime of the block,
‣ Has a distinct object per block,
‣ Is only accessible from all the threads within the block.
When declaring a variable in shared memory as an external array such as
extern __shared__ float shared[];
the size of the array is determined at launch time (see Execution Configuration). All
variables declared in this fashion start at the same address in memory, so that the layout
of the variables in the array must be explicitly managed through offsets. For example, if
one wants the equivalent of
short array0[128];
float array1[64];
int   array2[256];
in dynamically allocated shared memory, one could declare and initialize the arrays the
following way:
extern __shared__ float array[];
__device__ void func()      // __device__ or __global__ function
{
    short* array0 = (short*)array;
    float* array1 = (float*)&array0[128];
    int*   array2 = (int*)&array1[64];
}
Note that pointers need to be aligned to the type they point to, so the following code, for
example, does not work since array1 is not aligned to 4 bytes.
extern __shared__ float array[];
__device__ void func()
// __device__ or __global__ function
{
short* array0 = (short*)array;
float* array1 = (float*)&array0[127];
}
Alignment requirements for the built-in vector types are listed in Table 3.
B.2.4. __managed__
The __managed__ memory space specifier, optionally used together with __device__,
declares a variable that:
‣ Can be referenced from both device and host code, e.g., its address can be taken or it
  can be read or written directly from a device or host function.
‣ Has the lifetime of an application.
See __managed__ Memory Space Specifier for more details.
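A minimal illustrative sketch of a __managed__ variable updated on the device and read back on the host:

#include <cstdio>

__device__ __managed__ int counter = 0;   // a single object visible to host and device code

__global__ void bump()
{
    atomicAdd(&counter, 1);
}

int main()
{
    bump<<<1, 32>>>();
    cudaDeviceSynchronize();
    printf("counter = %d\n", counter);    // the host reads the same object directly
    return 0;
}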
B.2.5. __restrict__
nvcc supports restricted pointers via the __restrict__ keyword.
Restricted pointers were introduced in C99 to alleviate the aliasing problem that exists in
C-type languages, and which inhibits all kinds of optimization from code re-ordering to
common sub-expression elimination.
Here is an example subject to the aliasing issue, where use of restricted pointer can help
the compiler to reduce the number of instructions:
void foo(const float* a,
const float* b,
float* c)
{
c[0] = a[0] * b[0];
c[1] = a[0] * b[0];
c[2] = a[0] * b[0] * a[1];
c[3] = a[0] * a[1];
c[4] = a[0] * b[0];
c[5] = b[0];
...
}
In C-type languages, the pointers a, b, and c may be aliased, so any write through c
could modify elements of a or b. This means that to guarantee functional correctness, the
compiler cannot load a[0] and b[0] into registers, multiply them, and store the result
to both c[0] and c[1], because the results would differ from the abstract execution
model if, say, a[0] is really the same location as c[0]. So the compiler cannot take
advantage of the common sub-expression. Likewise, the compiler cannot just reorder the
computation of c[4] into the proximity of the computation of c[0] and c[1] because
the preceding write to c[3] could change the inputs to the computation of c[4].
By making a, b, and c restricted pointers, the programmer asserts to the compiler that
the pointers are in fact not aliased, which in this case means writes through c would
never overwrite elements of a or b. This changes the function prototype as follows:
void foo(const float* __restrict__ a,
const float* __restrict__ b,
float* __restrict__ c);
Note that all pointer arguments need to be made restricted for the compiler optimizer
to derive any benefit. With the __restrict__ keywords added, the compiler can now
reorder and do common sub-expression elimination at will, while retaining functionality
identical with the abstract execution model:
void foo(const float* __restrict__ a,
const float* __restrict__ b,
float* __restrict__ c)
{
float t0 = a[0];
float t1 = b[0];
float t2 = t0 * t1;
float t3 = a[1];
c[0] = t2;
c[1] = t2;
c[4] = t2;
c[2] = t2 * t3;
c[3] = t0 * t3;
c[5] = t1;
...
}
The effects here are a reduced number of memory accesses and reduced number of
computations. This is balanced by an increase in register pressure due to "cached" loads
and common sub-expressions.
Since register pressure is a critical issue in many CUDA codes, use of restricted pointers
can have negative performance impact on CUDA code, due to reduced occupancy.
B.3. Built-in Vector Types
B.3.1. char, short, int, long, longlong, float, double
These are vector types derived from the basic integer and floating-point types. They
are structures and the 1st, 2nd, 3rd, and 4th components are accessible through the
fields x, y, z, and w, respectively. They all come with a constructor function of the form
make_<type name>; for example,
int2 make_int2(int x, int y);
which creates a vector of type int2 with value (x, y).
The alignment requirements of the vector types are detailed in Table 3.
Table 3 Alignment Requirements

Type                     Alignment
char1, uchar1            1
char2, uchar2            2
char3, uchar3            1
char4, uchar4            4
short1, ushort1          2
short2, ushort2          4
short3, ushort3          2
short4, ushort4          8
int1, uint1              4
int2, uint2              8
int3, uint3              4
int4, uint4              16
long1, ulong1            4 if sizeof(long) is equal to sizeof(int), 8 otherwise
long2, ulong2            8 if sizeof(long) is equal to sizeof(int), 16 otherwise
long3, ulong3            4 if sizeof(long) is equal to sizeof(int), 8 otherwise
long4, ulong4            16
longlong1, ulonglong1    8
longlong2, ulonglong2    16
float1                   4
float2                   8
float3                   4
float4                   16
double1                  8
double2                  16
B.3.2. dim3
This type is an integer vector type based on uint3 that is used to specify dimensions.
When defining a variable of type dim3, any component left unspecified is initialized to 1.
B.4. Built-in Variables
Built-in variables specify the grid and block dimensions and the block and thread
indices. They are only valid within functions that are executed on the device.
B.4.1. gridDim
This variable is of type dim3 (see dim3) and contains the dimensions of the grid.
B.4.2. blockIdx
This variable is of type uint3 (see char, short, int, long, longlong, float, double) and
contains the block index within the grid.
B.4.3. blockDim
This variable is of type dim3 (see dim3) and contains the dimensions of the block.
B.4.4. threadIdx
This variable is of type uint3 (see char, short, int, long, longlong, float, double ) and
contains the thread index within the block.
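As an illustrative sketch, these built-in variables are typically combined to compute a unique global index for each thread:

__global__ void addOne(float* data, int n)
{
    // Global index of this thread across the entire grid.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] += 1.0f;
}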
B.4.5. warpSize
This variable is of type int and contains the warp size in threads (see SIMT Architecture
for the definition of a warp).
B.5. Memory Fence Functions
The CUDA programming model assumes a device with a weakly-ordered memory
model, that is, the order in which a CUDA thread writes data to shared memory, global
memory, page-locked host memory, or the memory of a peer device is not necessarily the
order in which the data is observed being written by another CUDA or host thread.
For example, if thread 1 executes writeXY() and thread 2 executes readXY() as
defined in the following code sample
__device__ volatile int X = 1, Y = 2;
__device__ void writeXY()
{
X = 10;
Y = 20;
}
__device__ void readXY()
{
int A = X;
int B = Y;
}
it is possible that B ends up equal to 20 and A equal to 1 for thread 2. In a strongly-ordered memory model, the only possibilities would be:
‣ A equal to 1 and B equal to 2,
‣ A equal to 10 and B equal to 2,
‣ A equal to 10 and B equal to 20.
Memory fence functions can be used to enforce some ordering on memory accesses. The
memory fence functions differ in the scope in which the orderings are enforced but they
are independent of the accessed memory space (shared memory, global memory, page-locked host memory, and the memory of a peer device).
void __threadfence_block();
ensures that:
‣ All writes to all memory made by the calling thread before the call to
  __threadfence_block() are observed by all threads in the block of the calling
  thread as occurring before all writes to all memory made by the calling thread after
  the call to __threadfence_block();
‣ All reads from all memory made by the calling thread before the call to
  __threadfence_block() are ordered before all reads from all memory made by
  the calling thread after the call to __threadfence_block().
void __threadfence();
acts as __threadfence_block() for all threads in the block of the calling thread and
also ensures that no writes to all memory made by the calling thread after the call to
__threadfence() are observed by any thread in the device as occurring before any
write to all memory made by the calling thread before the call to __threadfence().
Note that for this ordering guarantee to be true, the observing threads must truly
observe the memory and not cached versions of it; this is ensured by using the
volatile keyword as detailed in Volatile Qualifier.
void __threadfence_system();
acts as __threadfence_block() for all threads in the block of the calling thread and
also ensures that all writes to all memory made by the calling thread before the call to
__threadfence_system() are observed by all threads in the device, host threads,
and all threads in peer devices as occurring before all writes to all memory made by the
calling thread after the call to __threadfence_system().
__threadfence_system() is only supported by devices of compute capability 2.x and
higher.
In the previous code sample, inserting a fence function call between X = 10; and Y
= 20; and between int A = X; and int B = Y; would ensure that for thread 2, A
will always be equal to 10 if B is equal to 20. If thread 1 and 2 belong to the same block,
it is enough to use __threadfence_block(). If thread 1 and 2 do not belong to the
same block, __threadfence() must be used if they are CUDA threads from the same
device and __threadfence_system() must be used if they are CUDA threads from
two different devices.
A common use case is when threads consume some data produced by other threads as
illustrated by the following code sample of a kernel that computes the sum of an array
of N numbers in one call. Each block first sums a subset of the array and stores the result
in global memory. When all blocks are done, the last block done reads each of these
partial sums from global memory and sums them to obtain the final result. In order to
determine which block is finished last, each block atomically increments a counter to
signal that it is done with computing and storing its partial sum (see Atomic Functions
about atomic functions). The last block is the one that receives the counter value equal
to gridDim.x-1. If no fence is placed between storing the partial sum and incrementing
the counter, the counter might increment before the partial sum is stored and therefore,
might reach gridDim.x-1 and let the last block start reading partial sums before they
have been actually updated in memory.
Memory fence functions only affect the ordering of memory operations by a thread;
they do not ensure that these memory operations are visible to other threads (like
__syncthreads() does for threads within a block (see Synchronization Functions)). In
the code sample below, the visibility of memory operations on the result variable is
ensured by declaring it as volatile (see Volatile Qualifier).
__device__ unsigned int count = 0;
__shared__ bool isLastBlockDone;
__global__ void sum(const float* array, unsigned int N,
                    volatile float* result)
{
    // Each block sums a subset of the input array.
    float partialSum = calculatePartialSum(array, N);

    if (threadIdx.x == 0) {

        // Thread 0 of each block stores the partial sum
        // to global memory. The compiler will use
        // a store operation that bypasses the L1 cache
        // since the "result" variable is declared as
        // volatile. This ensures that the threads of
        // the last block will read the correct partial
        // sums computed by all other blocks.
        result[blockIdx.x] = partialSum;

        // Thread 0 makes sure that the incrementation
        // of the "count" variable is only performed after
        // the partial sum has been written to global memory.
        __threadfence();

        // Thread 0 signals that it is done.
        unsigned int value = atomicInc(&count, gridDim.x);

        // Thread 0 determines if its block is the last
        // block to be done.
        isLastBlockDone = (value == (gridDim.x - 1));
    }

    // Synchronize to make sure that each thread reads
    // the correct value of isLastBlockDone.
    __syncthreads();

    if (isLastBlockDone) {

        // The last block sums the partial sums
        // stored in result[0 .. gridDim.x-1]
        float totalSum = calculateTotalSum(result);

        if (threadIdx.x == 0) {

            // Thread 0 of last block stores the total sum
            // to global memory and resets the count
            // variable, so that the next kernel call
            // works properly.
            result[0] = totalSum;
            count = 0;
        }
    }
}
B.6. Synchronization Functions
void __syncthreads();
waits until all threads in the thread block have reached this point and all global and
shared memory accesses made by these threads prior to __syncthreads() are visible
to all threads in the block.
__syncthreads() is used to coordinate communication between the threads of the
same block. When some threads within a block access the same addresses in shared
or global memory, there are potential read-after-write, write-after-read, or write-after-write hazards for some of these memory accesses. These data hazards can be avoided by
synchronizing threads in-between these accesses.
__syncthreads() is allowed in conditional code but only if the conditional evaluates
identically across the entire thread block, otherwise the code execution is likely to hang
or produce unintended side effects.
Devices of compute capability 2.x and higher support three variations of
__syncthreads() described below.
int __syncthreads_count(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate
for all threads of the block and returns the number of threads for which predicate
evaluates to non-zero.
int __syncthreads_and(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate
for all threads of the block and returns non-zero if and only if predicate evaluates to non-zero for all of them.
int __syncthreads_or(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate
for all threads of the block and returns non-zero if and only if predicate evaluates to non-zero for any of them.
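A minimal sketch using __syncthreads_count() (the kernel and array names are hypothetical):

__global__ void countPositive(const float* data, int n, int* blockCounts)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // Every thread of the block contributes a predicate; the call returns,
    // to all threads, how many predicates evaluated to non-zero.
    int positives = __syncthreads_count(i < n && data[i] > 0.0f);
    if (threadIdx.x == 0)
        blockCounts[blockIdx.x] = positives;
}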
void __syncwarp(unsigned mask=0xffffffff);
will cause the executing thread to wait until all warp lanes named in mask have
executed a __syncwarp() (with the same mask) before resuming execution. All non-exited threads named in mask must execute a corresponding __syncwarp() with the
same mask, or the result is undefined.
Executing __syncwarp() guarantees memory ordering among threads participating in
the barrier. Thus, threads within a warp that wish to communicate via memory can store
to memory, execute __syncwarp(), and then safely read values stored by other threads
in the warp.
For .target sm_6x or below, all threads in mask must execute the same
__syncwarp() in convergence, and the union of all values in mask must be equal to
the active mask. Otherwise, the behavior is undefined.
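A minimal sketch of warp-level communication through shared memory, assuming the kernel is launched with one warp (32 threads) per block:

__global__ void rotateWithinWarp(const int* in, int* out)
{
    __shared__ int buffer[32];
    int lane = threadIdx.x;                       // lane index; blockDim.x == warpSize assumed
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    buffer[lane] = in[i];
    __syncwarp();                                 // make the stores visible within the warp
    out[i] = buffer[(lane + 1) % warpSize];       // safely read a value stored by another lane
}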
B.7. Mathematical Functions
The reference manual lists all C/C++ standard library mathematical functions that are
supported in device code and all intrinsic functions that are only supported in device
code.
Mathematical Functions provides accuracy information for some of these functions
when relevant.
B.8. Texture Functions
Texture objects are described in Texture Object API.
Texture references are described in Texture Reference API.
Texture fetching is described in Texture Fetching.
B.8.1. Texture Object API
B.8.1.1. tex1Dfetch()
template
T tex1Dfetch(cudaTextureObject_t texObj, int x);
fetches from the region of linear memory specified by the one-dimensional texture
object texObj using integer texture coordinate x. tex1Dfetch() only works with non-normalized coordinates, so only the border and clamp addressing modes are supported.
It does not perform any texture filtering. For integer types, it may optionally promote
the integer to single-precision floating point.
B.8.1.2. tex1D()
template
T tex1D(cudaTextureObject_t texObj, float x);
fetches from the CUDA array specified by the one-dimensional texture object texObj
using texture coordinate x.
B.8.1.3. tex1DLod()
template
T tex1DLod(cudaTextureObject_t texObj, float x, float level);
fetches from the CUDA array specified by the one-dimensional texture object texObj
using texture coordinate x at the level-of-detail level.
B.8.1.4. tex1DGrad()
template
T tex1DGrad(cudaTextureObject_t texObj, float x, float dx, float dy);
fetches from the CUDA array specified by the one-dimensional texture object texObj
using texture coordinate x. The level-of-detail is derived from the X-gradient dx and Y-gradient dy.
B.8.1.5. tex2D()
template
T tex2D(cudaTextureObject_t texObj, float x, float y);
fetches from the CUDA array or the region of linear memory specified by the two-dimensional texture object texObj using texture coordinate (x,y).
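A minimal sketch of a kernel sampling such a texture object, assuming it was created with a float channel and normalized coordinates enabled:

__global__ void sampleTexture(cudaTextureObject_t texObj, float* out,
                              int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        float u = (x + 0.5f) / width;     // normalized coordinates
        float v = (y + 0.5f) / height;
        out[y * width + x] = tex2D<float>(texObj, u, v);
    }
}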
B.8.1.6. tex2DLod()
template
T tex2DLod(cudaTextureObject_t texObj, float x, float y, float level);
fetches from the CUDA array or the region of linear memory specified by the two-dimensional texture object texObj using texture coordinate (x,y) at level-of-detail
level.
B.8.1.7. tex2DGrad()
template
T tex2DGrad(cudaTextureObject_t texObj, float x, float y,
float2 dx, float2 dy);
fetches from the CUDA array specified by the two-dimensional texture object texObj
using texture coordinate (x,y). The level-of-detail is derived from the dx and dy
gradients.
B.8.1.8. tex3D()
template
T tex3D(cudaTextureObject_t texObj, float x, float y, float z);
fetches from the CUDA array specified by the three-dimensional texture object texObj
using texture coordinate (x,y,z).
B.8.1.9. tex3DLod()
template
T tex3DLod(cudaTextureObject_t texObj, float x, float y, float z, float level);
fetches from the CUDA array or the region of linear memory specified by the three-dimensional texture object texObj using texture coordinate (x,y,z) at level-of-detail
level.
B.8.1.10. tex3DGrad()
template
T tex3DGrad(cudaTextureObject_t texObj, float x, float y, float z,
float4 dx, float4 dy);
fetches from the CUDA array specified by the three-dimensional texture object texObj
using texture coordinate (x,y,z) at a level-of-detail derived from the X and Y gradients
dx and dy.
B.8.1.11. tex1DLayered()
template
T tex1DLayered(cudaTextureObject_t texObj, float x, int layer);
fetches from the CUDA array specified by the one-dimensional texture object texObj
using texture coordinate x and index layer, as described in Layered Textures.
B.8.1.12. tex1DLayeredLod()
template
T tex1DLayeredLod(cudaTextureObject_t texObj, float x, int layer, float level);
fetches from the CUDA array specified by the one-dimensional layered texture at layer
layer using texture coordinate x and level-of-detail level.
B.8.1.13. tex1DLayeredGrad()
template
T tex1DLayeredGrad(cudaTextureObject_t texObj, float x, int layer,
float dx, float dy);
fetches from the CUDA array specified by the one-dimensional layered texture at layer
layer using texture coordinate x and a level-of-detail derived from the dx and dy
gradients.
B.8.1.14. tex2DLayered()
template
T tex2DLayered(cudaTextureObject_t texObj,
float x, float y, int layer);
fetches from the CUDA array specified by the two-dimensional texture object texObj
using texture coordinate (x,y) and index layer, as described in Layered Textures.
B.8.1.15. tex2DLayeredLod()
template
T tex2DLayeredLod(cudaTextureObject_t texObj, float x, float y, int layer,
float level);
fetches from the CUDA array specified by the two-dimensional layered texture at layer
layer using texture coordinate (x,y) at level-of-detail level.
B.8.1.16. tex2DLayeredGrad()
template
T tex2DLayeredGrad(cudaTextureObject_t texObj, float x, float y, int layer,
float2 dx, float2 dy);
fetches from the CUDA array specified by the two-dimensional layered texture at layer
layer using texture coordinate (x,y) and a level-of-detail derived from the dx and dy
X and Y gradients.
B.8.1.17. texCubemap()
template
T texCubemap(cudaTextureObject_t texObj, float x, float y, float z);
fetches from the CUDA array specified by the cubemap texture object texObj using
texture coordinate (x,y,z), as described in Cubemap Textures.
B.8.1.18. texCubemapLod()
template
T texCubemapLod(cudaTextureObject_t texObj, float x, float y, float z,
float level);
fetches from the CUDA array specified by the three-dimensional texture object texObj
using texture coordinate (x,y,z) as described in Cubemap Textures. The level-of-detail
used is given by level.
B.8.1.19. texCubemapLayered()
template
T texCubemapLayered(cudaTextureObject_t texObj,
float x, float y, float z, int layer);
fetches from the CUDA array specified by the cubemap layered texture object texObj
using texture coordinates (x,y,z), and index layer, as described in Cubemap Layered
Textures.
B.8.1.20. texCubemapLayeredLod()
template
T texCubemapLayeredLod(cudaTextureObject_t texObj, float x, float y, float z,
int layer, float level);
fetches from the CUDA array specified by the cubemap layered texture object texObj
using texture coordinate (x,y,z) and index layer, as described in Cubemap Layered
Textures, at level-of-detail level level.
B.8.1.21. tex2Dgather()
template
T tex2Dgather(cudaTextureObject_t texObj,
float x, float y, int comp = 0);
fetches from the CUDA array specified by the 2D texture object texObj using texture
coordinates x and y and the comp parameter as described in Texture Gather.
B.8.2. Texture Reference API
B.8.2.1. tex1Dfetch()
template
Type tex1Dfetch(
texture texRef,
int x);
float tex1Dfetch(
texture texRef,
int x);
float tex1Dfetch(
texture texRef,
int x);
float tex1Dfetch(
texture texRef,
int x);
float tex1Dfetch(
texture texRef,
int x);
fetches from the region of linear memory bound to the one-dimensional texture
reference texRef using integer texture coordinate x. tex1Dfetch() only works with
non-normalized coordinates, so only the border and clamp addressing modes are
supported. It does not perform any texture filtering. For integer types, it may optionally
promote the integer to single-precision floating point.
Besides the functions shown above, 2-, and 4-tuples are supported; for example:
float4 tex1Dfetch(
texture texRef,
int x);
fetches from the region of linear memory bound to texture reference texRef using
texture coordinate x.
B.8.2.2. tex1D()
template
Type tex1D(texture texRef,
float x);
fetches from the CUDA array bound to the one-dimensional texture reference texRef
using texture coordinate x. Type is equal to DataType except when readMode is equal
to cudaReadModeNormalizedFloat (see Texture Reference API), in which case Type is
equal to the matching floating-point type.
B.8.2.3. tex1DLod()
template
Type tex1DLod(texture texRef, float x,
float level);
fetches from the CUDA array bound to the one-dimensional texture reference texRef
using texture coordinate x. The level-of-detail is given by level. Type is the same as
DataType except when readMode is cudaReadModeNormalizedFloat (see Texture
Reference API), in which case Type is the corresponding floating-point type.
B.8.2.4. tex1DGrad()
template
Type tex1DGrad(texture texRef, float x,
float dx, float dy);
fetches from the CUDA array bound to the one-dimensional texture reference
texRef using texture coordinate x. The level-of-detail is derived from the dx and
dy X- and Y-gradients. Type is the same as DataType except when readMode is
cudaReadModeNormalizedFloat (see Texture Reference API), in which case Type is
the corresponding floating-point type.
B.8.2.5. tex2D()
template
Type tex2D(texture texRef,
float x, float y);
fetches from the CUDA array or the region of linear memory bound to the two-dimensional texture reference texRef using texture coordinates x and y. Type is equal
to DataType except when readMode is equal to cudaReadModeNormalizedFloat (see
Texture Reference API), in which case Type is equal to the matching floating-point type.
B.8.2.6. tex2DLod()
template
Type tex2DLod(texture texRef,
float x, float y, float level);
fetches from the CUDA array bound to the two-dimensional texture reference texRef
using texture coordinate (x,y). The level-of-detail is given by level. Type is the same
as DataType except when readMode is cudaReadModeNormalizedFloat (see Texture
Reference API), in which case Type is the corresponding floating-point type.
B.8.2.7. tex2DGrad()
template
Type tex2DGrad(texture texRef,
float x, float y, float2 dx, float2 dy);
fetches from the CUDA array bound to the two-dimensional texture reference
texRef using texture coordinate (x,y). The level-of-detail is derived from the dx
and dy X- and Y-gradients. Type is the same as DataType except when readMode is
cudaReadModeNormalizedFloat (see Texture Reference API), in which case Type is
the corresponding floating-point type.
B.8.2.8. tex3D()
template
Type tex3D(texture texRef,
float x, float y, float z);
fetches from the CUDA array bound to the three-dimensional texture reference texRef
using texture coordinates x, y, and z. Type is equal to DataType except when readMode
is equal to cudaReadModeNormalizedFloat (see Texture Reference API), in which case
Type is equal to the matching floating-point type.
B.8.2.9. tex3DLod()
template
Type tex3DLod(texture texRef,
float x, float y, float z, float level);
fetches from the CUDA array bound to the three-dimensional texture reference texRef
using texture coordinate (x,y,z). The level-of-detail is given by level. Type is the
same as DataType except when readMode is cudaReadModeNormalizedFloat (see
Texture Reference API), in which case Type is the corresponding floating-point type.
B.8.2.10. tex3DGrad()
template
Type tex3DGrad(texture texRef,
float x, float y, float z, float4 dx, float4 dy);
fetches from the CUDA array bound to the three-dimensional texture reference texRef
using texture coordinate (x,y,z). The level-of-detail is derived from the dx and
dy X- and Y-gradients. Type is the same as DataType except when readMode is
cudaReadModeNormalizedFloat (see Texture Reference API), in which case Type is
the corresponding floating-point type.
B.8.2.11. tex1DLayered()
template
Type tex1DLayered(
texture texRef,
float x, int layer);
fetches from the CUDA array bound to the one-dimensional layered texture
reference texRef using texture coordinate x and index layer, as described in
Layered Textures. Type is equal to DataType except when readMode is equal to
cudaReadModeNormalizedFloat (see Texture Reference API), in which case Type is
equal to the matching floating-point type.
B.8.2.12. tex1DLayeredLod()
template
Type tex1DLayeredLod(texture texRef,
float x, int layer, float level);
fetches from the CUDA array bound to the one-dimensional texture reference texRef
using texture coordinate x and index layer as described in Layered Textures. The level-of-detail is given by level. Type is the same as DataType except when readMode is
cudaReadModeNormalizedFloat (see Texture Reference API), in which case Type is
the corresponding floating-point type.
B.8.2.13. tex1DLayeredGrad()
template
Type tex1DLayeredGrad(texture texRef,
float x, int layer, float dx, float dy);
fetches from the CUDA array bound to the one-dimensional texture reference texRef
using texture coordinate x and index layer as described in Layered Textures. The
level-of-detail is derived from the dx and dy X- and Y-gradients. Type is the same as
DataType except when readMode is cudaReadModeNormalizedFloat (see Texture
Reference API), in which case Type is the corresponding floating-point type.
B.8.2.14. tex2DLayered()
template
Type tex2DLayered(
texture texRef,
float x, float y, int layer);
fetches from the CUDA array bound to the two-dimensional layered texture
reference texRef using texture coordinates x and y, and index layer, as described
in Texture Memory. Type is equal to DataType except when readMode is equal to
cudaReadModeNormalizedFloat (see Texture Reference API), in which case Type is
equal to the matching floating-point type.
B.8.2.15. tex2DLayeredLod()
template
Type tex2DLayeredLod(texture texRef,
float x, float y, int layer, float level);
fetches from the CUDA array bound to the two-dimensional texture reference texRef
using texture coordinate (x,y) and index layer as described in Layered Textures. The
level-of-detail is given by level. Type is the same as DataType except when readMode
is cudaReadModeNormalizedFloat (see Texture Reference API), in which case Type is
the corresponding floating-point type.
B.8.2.16. tex2DLayeredGrad()
template
Type tex2DLayeredGrad(texture texRef,
float x, float y, int layer, float2 dx, float2 dy);
fetches from the CUDA array bound to the two-dimensional texture reference texRef
using texture coordinate (x,y) and index layer as described in Layered Textures. The
level-of-detail is derived from the dx and dy X- and Y-gradients. Type is the same as
DataType except when readMode is cudaReadModeNormalizedFloat (see Texture
Reference API), in which case Type is the corresponding floating-point type.
B.8.2.17. texCubemap()
template
Type texCubemap(
texture texRef,
float x, float y, float z);
fetches from the CUDA array bound to the cubemap texture reference texRef using
texture coordinates x, y, and z, as described in Cubemap Textures. Type is equal to
DataType except when readMode is equal to cudaReadModeNormalizedFloat (see
Texture Reference API), in which case Type is equal to the matching floating-point type.
B.8.2.18. texCubemapLod()
template
Type texCubemapLod(texture texRef,
float x, float y, float z, float level);
fetches from the CUDA array bound to the cubemap texture reference texRef
using texture coordinate (x,y,z). The level-of-detail is given by level. Type is the
same as DataType except when readMode is cudaReadModeNormalizedFloat (see
Texture Reference API), in which case Type is the corresponding floating-point type.
B.8.2.19. texCubemapLayered()
template
Type texCubemapLayered(
texture texRef,
float x, float y, float z, int layer);
fetches from the CUDA array bound to the cubemap layered texture reference texRef
using texture coordinates x, y, and z, and index layer, as described in Cubemap
Layered Textures. Type is equal to DataType except when readMode is equal to
cudaReadModeNormalizedFloat (see Texture Reference API), in which case Type is
equal to the matching floating-point type.
B.8.2.20. texCubemapLayeredLod()
template
Type texCubemapLayeredLod(texture texRef,
float x, float y, float z, int layer, float level);
fetches from the CUDA array bound to the cubemap layered texture reference texRef
using texture coordinate (x,y,z) and index layer as described in Cubemap Layered Textures.
The level-of-detail is given by level. Type is the same as DataType except when
readMode is cudaReadModeNormalizedFloat (see Texture Reference API), in which
case Type is the corresponding floating-point type.
B.8.2.21. tex2Dgather()
template
Type tex2Dgather(
texture texRef,
float x, float y, int comp = 0);
fetches from the CUDA array bound to the 2D texture reference texRef using texture
coordinates x and y and the comp parameter as described in Texture Gather. Type is a 4-component vector type. It is based on the base type of DataType except when readMode
is equal to cudaReadModeNormalizedFloat (see Texture Reference API), in which case
it is always float4.
B.9. Surface Functions
Surface functions are only supported by devices of compute capability 2.0 and higher.
Surface objects are described in Surface Object API.
Surface references are described in Surface Reference API.
In the sections below, boundaryMode specifies the boundary mode, that is how out-of-range
surface coordinates are handled; it is equal to either cudaBoundaryModeClamp,
in which case out-of-range coordinates are clamped to the valid range, or
cudaBoundaryModeZero, in which case out-of-range reads return zero and out-of-range
writes are ignored, or cudaBoundaryModeTrap, in which case out-of-range accesses
cause the kernel execution to fail.
B.9.1. Surface Object API
B.9.1.1. surf1Dread()
template
T surf1Dread(cudaSurfaceObject_t surfObj, int x,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the one-dimensional surface object surfObj using
coordinate x.
B.9.1.2. surf1Dwrite
template
void surf1Dwrite(T data,
cudaSurfaceObject_t surfObj,
int x,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the one-dimensional surface object
surfObj at coordinate x.
B.9.1.3. surf2Dread()
template
T surf2Dread(cudaSurfaceObject_t surfObj,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
template
void surf2Dread(T* data,
cudaSurfaceObject_t surfObj,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the two-dimensional surface object surfObj using
coordinates x and y.
B.9.1.4. surf2Dwrite()
template
void surf2Dwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the two-dimensional surface object
surfObj at coordinate x and y.
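A minimal sketch of a kernel writing to such a surface object, assuming it is backed by a CUDA array of float elements (the x coordinate of surface functions is expressed in bytes):

__global__ void fillSurface(cudaSurfaceObject_t surfObj, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        float value = (float)(y * width + x);
        // The x coordinate is in bytes, hence the multiplication by the element size.
        surf2Dwrite(value, surfObj, x * (int)sizeof(float), y);
    }
}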
B.9.1.5. surf3Dread()
template
T surf3Dread(cudaSurfaceObject_t surfObj,
int x, int y, int z,
boundaryMode = cudaBoundaryModeTrap);
template
void surf3Dread(T* data,
cudaSurfaceObject_t surfObj,
int x, int y, int z,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the three-dimensional surface object surfObj using
coordinates x, y, and z.
B.9.1.6. surf3Dwrite()
template
void surf3Dwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int z,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the three-dimensional object surfObj
at coordinate x, y, and z.
B.9.1.7. surf1DLayeredread()
template
T surf1DLayeredread(
cudaSurfaceObject_t surfObj,
int x, int layer,
boundaryMode = cudaBoundaryModeTrap);
template
void surf1DLayeredread(T data,
cudaSurfaceObject_t surfObj,
int x, int layer,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the one-dimensional layered surface object surfObj
using coordinate x and index layer.
B.9.1.8. surf1DLayeredwrite()
template
void surf1DLayeredwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int layer,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the one-dimensional layered surface
object surfObj at coordinate x and index layer.
B.9.1.9. surf2DLayeredread()
template
T surf2DLayeredread(
cudaSurfaceObject_t surfObj,
int x, int y, int layer,
boundaryMode = cudaBoundaryModeTrap);
template
void surf2DLayeredread(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int layer,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the two-dimensional layered surface object surfObj
using coordinate x and y, and index layer.
B.9.1.10. surf2DLayeredwrite()
template
void surf2DLayeredwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int layer,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the two-dimensional layered surface
object surfObj at coordinate x and y, and index layer.
B.9.1.11. surfCubemapread()
template
T surfCubemapread(
cudaSurfaceObject_t surfObj,
int x, int y, int face,
boundaryMode = cudaBoundaryModeTrap);
template
void surfCubemapread(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int face,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the cubemap surface object surfObj using
coordinate x and y, and face index face.
B.9.1.12. surfCubemapwrite()
template
void surfCubemapwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int face,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the cubemap object surfObj at
coordinate x and y, and face index face.
B.9.1.13. surfCubemapLayeredread()
template
T surfCubemapLayeredread(
cudaSurfaceObject_t surfObj,
int x, int y, int layerFace,
boundaryMode = cudaBoundaryModeTrap);
template
void surfCubemapLayeredread(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int layerFace,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the cubemap layered surface object surfObj using
coordinate x and y, and index layerFace.
B.9.1.14. surfCubemapLayeredwrite()
template
void surfCubemapLayeredwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int layerFace,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the cubemap layered object surfObj at
coordinate x and y, and index layerFace.
B.9.2. Surface Reference API
B.9.2.1. surf1Dread()
template
Type surf1Dread(surface surfRef,
int x,
boundaryMode = cudaBoundaryModeTrap);
template
void surf1Dread(Type data,
surface surfRef,
int x,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to the one-dimensional surface reference surfRef using
coordinate x.
B.9.2.2. surf1Dwrite
template
void surf1Dwrite(Type data,
surface surfRef,
int x,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to the one-dimensional surface reference
surfRef at coordinate x.
B.9.2.3. surf2Dread()
template
Type surf2Dread(surface surfRef,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
template
void surf2Dread(Type* data,
surface surfRef,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to the two-dimensional surface reference surfRef using
coordinates x and y.
B.9.2.4. surf2Dwrite()
template
void surf2Dwrite(Type data,
surface surfRef,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to the two-dimensional surface reference
surfRef at coordinate x and y.
B.9.2.5. surf3Dread()
template
Type surf3Dread(surface surfRef,
int x, int y, int z,
boundaryMode = cudaBoundaryModeTrap);
template
void surf3Dread(Type* data,
surface surfRef,
int x, int y, int z,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to the three-dimensional surface reference surfRef using
coordinates x, y, and z.
B.9.2.6. surf3Dwrite()
template
void surf3Dwrite(Type data,
surface surfRef,
int x, int y, int z,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to the three-dimensional surface reference
surfRef at coordinate x, y, and z.
B.9.2.7. surf1DLayeredread()
template
Type surf1DLayeredread(
surface surfRef,
int x, int layer,
boundaryMode = cudaBoundaryModeTrap);
template
void surf1DLayeredread(Type data,
surface surfRef,
int x, int layer,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to the one-dimensional layered surface reference surfRef
using coordinate x and index layer.
B.9.2.8. surf1DLayeredwrite()
template
void surf1DLayeredwrite(Type data,
surface surfRef,
int x, int layer,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to the one-dimensional layered surface
reference surfRef at coordinate x and index layer.
B.9.2.9. surf2DLayeredread()
template
Type surf2DLayeredread(
surface surfRef,
int x, int y, int layer,
boundaryMode = cudaBoundaryModeTrap);
template
void surf2DLayeredread(Type data,
surface surfRef,
int x, int y, int layer,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to the two-dimensional layered surface reference surfRef
using coordinate x and y, and index layer.
B.9.2.10. surf2DLayeredwrite()
template
void surf2DLayeredwrite(Type data,
surface surfRef,
int x, int y, int layer,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to the two-dimensional layered surface
reference surfRef at coordinate x and y, and index layer.
B.9.2.11. surfCubemapread()
template
Type surfCubemapread(
surface surfRef,
int x, int y, int face,
boundaryMode = cudaBoundaryModeTrap);
template
void surfCubemapread(Type data,
surface surfRef,
int x, int y, int face,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to the cubemap surface reference surfRef using
coordinate x and y, and face index face.
B.9.2.12. surfCubemapwrite()
template
void surfCubemapwrite(Type data,
surface surfRef,
int x, int y, int face,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to the cubemap reference surfRef at
coordinate x and y, and face index face.
B.9.2.13. surfCubemapLayeredread()
template
Type surfCubemapLayeredread(
surface surfRef,
int x, int y, int layerFace,
boundaryMode = cudaBoundaryModeTrap);
template
void surfCubemapLayeredread(Type data,
surface surfRef,
int x, int y, int layerFace,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array bound to the cubemap layered surface reference surfRef using
coordinate x and y, and index layerFace.
B.9.2.14. surfCubemapLayeredwrite()
template
void surfCubemapLayeredwrite(Type data,
surface surfRef,
int x, int y, int layerFace,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array bound to the cubemap layered reference surfRef
at coordinate x and y, and index layerFace.
B.10. Read-Only Data Cache Load Function
The read-only data cache load function is only supported by devices of compute
capability 3.5 and higher.
T __ldg(const T* address);
returns the data of type T located at address address, where T is char, short, int,
long long, unsigned char, unsigned short, unsigned int, unsigned long
long, int2, int4, uint2, uint4, float, float2, float4, double, or double2. The
operation is cached in the read-only data cache (see Global Memory).
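A minimal sketch, assuming a device of compute capability 3.5 or higher:

__global__ void scaleReadOnly(const float* __restrict__ in, float* out,
                              float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = s * __ldg(&in[i]);   // load in[i] through the read-only data cache
}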
B.11. Time Function
clock_t clock();
long long int clock64();
when executed in device code, returns the value of a per-multiprocessor counter that is
incremented every clock cycle. Sampling this counter at the beginning and at the end of
a kernel, taking the difference of the two samples, and recording the result per thread
provides a measure for each thread of the number of clock cycles taken by the device to
completely execute the thread, but not of the number of clock cycles the device actually
spent executing thread instructions. The former number is greater than the latter since
threads are time sliced.
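An illustrative sketch that records this per-thread cycle count:

__global__ void timedKernel(float* data, long long int* cycles, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        long long int start = clock64();
        data[i] = data[i] * data[i] + 1.0f;   // the work being measured
        long long int stop = clock64();
        cycles[i] = stop - start;             // per-thread cycle count as described above
    }
}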
B.12. Atomic Functions
An atomic function performs a read-modify-write atomic operation on one 32-bit or 64-bit word residing in global or shared memory. For example, atomicAdd() reads a word
at some address in global or shared memory, adds a number to it, and writes the result
back to the same address. The operation is atomic in the sense that it is guaranteed to
be performed without interference from other threads. In other words, no other thread
can access this address until the operation is complete. Atomic functions do not act as
memory fences and do not imply synchronization or ordering constraints for memory
operations (see Memory Fence Functions for more details on memory fences). Atomic
functions can only be used in device functions.
On GPU architectures with compute capability lower than 6.x, atomic operations done
from the GPU are atomic only with respect to that GPU. If the GPU attempts an atomic
operation to a peer GPU’s memory, the operation appears as a regular read followed
by a write to the peer GPU, and the two operations are not done as one single atomic
operation. Similarly, atomic operations from the GPU to CPU memory will not be atomic
with respect to CPU initiated atomic operations.
Compute capability 6.x introduces a new type of atomics which allows developers to
widen or narrow the scope of an atomic operation. For example, atomicAdd_system
guarantees that the instruction is atomic with respect to other CPUs and GPUs in the
system. atomicAdd_block implies that the instruction is atomic only with respect to
atomics from other threads in the same thread block. In the following example both the
CPU and the GPU can atomically update an integer value at address addr:
__global__ void mykernel(int *addr) {
    atomicAdd_system(addr, 10);        // only available on devices with compute capability 6.x
}

void foo() {
    int *addr;
    cudaMallocManaged(&addr, 4);
    *addr = 0;

    mykernel<<<...>>>(addr);
    __sync_fetch_and_add(addr, 10);    // CPU atomic operation
}
The new scoped versions of atomics are available for all atomics listed below only for
compute capabilities 6.x and later.
Note that any atomic operation can be implemented based on atomicCAS() (Compare
And Swap). For example, atomicAdd() for double-precision floating-point numbers
is not available on devices with compute capability lower than 6.0 but it can be
implemented as follows:
#if __CUDA_ARCH__ < 600
__device__ double atomicAdd(double* address, double val)
{
    unsigned long long int* address_as_ull =
                              (unsigned long long int*)address;
    unsigned long long int old = *address_as_ull, assumed;

    do {
        assumed = old;
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val +
                               __longlong_as_double(assumed)));

    // Note: uses integer comparison to avoid hang in case of NaN (since NaN != NaN)
    } while (assumed != old);

    return __longlong_as_double(old);
}
#endif
B.12.1. Arithmetic Functions
B.12.1.1. atomicAdd()
int atomicAdd(int* address, int val);
unsigned int atomicAdd(unsigned int* address,
                       unsigned int val);
unsigned long long int atomicAdd(unsigned long long int* address,
                                 unsigned long long int val);
float atomicAdd(float* address, float val);
double atomicAdd(double* address, double val);
reads the 32-bit or 64-bit word old located at the address address in global or shared
memory, computes (old + val), and stores the result back to memory at the same
address. These three operations are performed in one atomic transaction. The function
returns old.
The 32-bit floating-point version of atomicAdd() is only supported by devices of
compute capability 2.x and higher.
The 64-bit floating-point version of atomicAdd() is only supported by devices of
compute capability 6.x and higher.
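As an illustrative sketch (not part of the reference above), atomicAdd() on unsigned int can be used to build a per-block histogram in shared memory before flushing it to global memory; the bin count NUM_BINS and the byte-valued input are assumptions made for this example.
#define NUM_BINS 256

__global__ void histogram256(const unsigned char* in, unsigned int* bins, int n)
{
    __shared__ unsigned int smem[NUM_BINS];
    for (int b = threadIdx.x; b < NUM_BINS; b += blockDim.x)
        smem[b] = 0;
    __syncthreads();

    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        atomicAdd(&smem[in[i]], 1u);        // atomic add on a shared memory word

    __syncthreads();
    for (int b = threadIdx.x; b < NUM_BINS; b += blockDim.x)
        atomicAdd(&bins[b], smem[b]);       // atomic add on a global memory word
}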
B.12.1.2. atomicSub()
int atomicSub(int* address, int val);
unsigned int atomicSub(unsigned int* address,
                       unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory,
computes (old - val), and stores the result back to memory at the same address.
These three operations are performed in one atomic transaction. The function returns
old.
B.12.1.3. atomicExch()
int atomicExch(int* address, int val);
unsigned int atomicExch(unsigned int* address,
                        unsigned int val);
unsigned long long int atomicExch(unsigned long long int* address,
                                  unsigned long long int val);
float atomicExch(float* address, float val);
reads the 32-bit or 64-bit word old located at the address address in global or shared
memory and stores val back to memory at the same address. These two operations are
performed in one atomic transaction. The function returns old.
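For illustration, atomicExch() can read a device-wide counter and reset it to zero in a single atomic transaction; the variable counter and the kernel name flushCounter are hypothetical names used only in this sketch.
__device__ unsigned int counter;   // assumed to be updated elsewhere, e.g. with atomicAdd()

__global__ void flushCounter(unsigned int* out)
{
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        // No other thread can update counter between the read and the write-back of 0.
        unsigned int old = atomicExch(&counter, 0u);
        *out = old;
    }
}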
B.12.1.4. atomicMin()
int atomicMin(int* address, int val);
unsigned int atomicMin(unsigned int* address,
                       unsigned int val);
unsigned long long int atomicMin(unsigned long long int* address,
                                 unsigned long long int val);
reads the 32-bit or 64-bit word old located at the address address in global or shared
memory, computes the minimum of old and val, and stores the result back to memory
at the same address. These three operations are performed in one atomic transaction.
The function returns old.
The 64-bit version of atomicMin() is only supported by devices of compute capability
3.5 and higher.
B.12.1.5. atomicMax()
int atomicMax(int* address, int val);
unsigned int atomicMax(unsigned int* address,
                       unsigned int val);
unsigned long long int atomicMax(unsigned long long int* address,
                                 unsigned long long int val);
reads the 32-bit or 64-bit word old located at the address address in global or shared
memory, computes the maximum of old and val, and stores the result back to memory
at the same address. These three operations are performed in one atomic transaction.
The function returns old.
The 64-bit version of atomicMax() is only supported by devices of compute capability
3.5 and higher.
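As a small illustrative sketch, atomicMax() can reduce an array to its maximum value, assuming the caller initializes *result to INT_MIN (for example with cudaMemcpy) before the launch; the kernel name arrayMax is hypothetical.
__global__ void arrayMax(const int* in, int* result, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicMax(result, in[i]);   // *result must start at INT_MIN for a correct maximum
}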
B.12.1.6. atomicInc()
unsigned int atomicInc(unsigned int* address,
                       unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory,
computes ((old >= val) ? 0 : (old+1)), and stores the result back to memory at
the same address. These three operations are performed in one atomic transaction. The
function returns old.
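For illustration, the wrap-around behavior of atomicInc() is convenient for allocating slots in a fixed-size ring buffer; QUEUE_SIZE, head, and queue are assumptions made for this sketch, and producer overrun is not handled.
#define QUEUE_SIZE 1024u

__device__ unsigned int head;              // assumed zero-initialized
__device__ int queue[QUEUE_SIZE];

__global__ void enqueue(const int* items, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // The counter increments 0, 1, ..., QUEUE_SIZE-1 and then wraps back to 0,
        // so slot is always a valid index into queue.
        unsigned int slot = atomicInc(&head, QUEUE_SIZE - 1);
        queue[slot] = items[i];
    }
}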
B.12.1.7. atomicDec()
unsigned int atomicDec(unsigned int* address,
                       unsigned int val);
reads the 32-bit word old located at the address address in global or shared memory,
computes (((old == 0) | (old > val)) ? val : (old-1) ), and stores the
result back to memory at the same address. These three operations are performed in one
atomic transaction. The function returns old.
B.12.1.8. atomicCAS()
int atomicCAS(int* address, int compare, int val);
unsigned int atomicCAS(unsigned int* address,
                       unsigned int compare,
                       unsigned int val);
unsigned long long int atomicCAS(unsigned long long int* address,
                                 unsigned long long int compare,
                                 unsigned long long int val);
reads the 32-bit or 64-bit word old located at the address address in global or shared
memory, computes (old == compare ? val : old), and stores the result back
to memory at the same address. These three operations are performed in one atomic
transaction. The function returns old (Compare And Swap).
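Following the same atomicCAS() pattern shown earlier for double-precision atomicAdd(), a single-precision atomic maximum (which the runtime does not provide) could be sketched as follows; the function name atomicMaxFloat is hypothetical.
__device__ float atomicMaxFloat(float* address, float val)
{
    int* address_as_int = (int*)address;
    int old = *address_as_int, assumed;
    do {
        assumed = old;
        // Re-attempt until no other thread has modified the word in between.
        old = atomicCAS(address_as_int, assumed,
                        __float_as_int(fmaxf(val, __int_as_float(assumed))));
    } while (assumed != old);
    return __int_as_float(old);
}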
B.12.2. Bitwise Functions
B.12.2.1. atomicAnd()
int atomicAnd(int* address, int val);
unsigned int atomicAnd(unsigned int* address,
                       unsigned int val);
unsigned long long int atomicAnd(unsigned long long int* address,
                                 unsigned long long int val);
reads the 32-bit or 64-bit word old located at the address address in global or shared
memory, computes (old & val), and stores the result back to memory at the same
address. These three operations are performed in one atomic transaction. The function
returns old.
The 64-bit version of atomicAnd() is only supported by devices of compute capability
3.5 and higher.
B.12.2.2. atomicOr()
int atomicOr(int* address, int val);
unsigned int atomicOr(unsigned int* address,
                      unsigned int val);
unsigned long long int atomicOr(unsigned long long int* address,
                                unsigned long long int val);
reads the 32-bit or 64-bit word old located at the address address in global or shared
memory, computes (old | val), and stores the result back to memory at the same
address. These three operations are performed in one atomic transaction. The function
returns old.
The 64-bit version of atomicOr() is only supported by devices of compute capability
3.5 and higher.
B.12.2.3. atomicXor()
int atomicXor(int* address, int val);
unsigned int atomicXor(unsigned int* address,
                       unsigned int val);
unsigned long long int atomicXor(unsigned long long int* address,
                                 unsigned long long int val);
reads the 32-bit or 64-bit word old located at the address address in global or shared
memory, computes (old ^ val), and stores the result back to memory at the same
address. These three operations are performed in one atomic transaction. The function
returns old.
The 64-bit version of atomicXor() is only supported by devices of compute capability
3.5 and higher.
B.13. Warp Vote Functions
int __all_sync(unsigned mask, int predicate);
int __any_sync(unsigned mask, int predicate);
unsigned __ballot_sync(unsigned mask, int predicate);
unsigned __activemask();
Deprecation notice: __any, __all, and __ballot have been deprecated as of CUDA 9.0.
The warp vote functions allow the threads of a given warp to perform a reduction-and-broadcast operation. These functions take as input an integer predicate from each
thread in the warp and compare those values with zero. The results of the comparisons
are combined (reduced) across the active threads of the warp in one of the following
ways, broadcasting a single return value to each participating thread:
__all_sync(unsigned mask, predicate):
Evaluate predicate for all non-exited threads in mask and return non-zero if and
only if predicate evaluates to non-zero for all of them.
__any_sync(unsigned mask, predicate):
Evaluate predicate for all non-exited threads in mask and return non-zero if and
only if predicate evaluates to non-zero for any of them.
__ballot_sync(unsigned mask, predicate):
Evaluate predicate for all non-exited threads in mask and return an integer whose
Nth bit is set if and only if predicate evaluates to non-zero for the Nth thread of the
warp and the Nth thread is active.
__activemask():
Returns a 32-bit integer mask of all currently active threads in the calling warp.
The Nth bit is set if the Nth lane in the warp is active when __activemask() is
called. Inactive threads are represented by 0 bits in the returned mask. Threads
which have exited the program are always marked as inactive. Note that threads that
are convergent at an __activemask() call are not guaranteed to be convergent at
subsequent instructions unless those instructions are synchronizing warp-builtin
functions.
Notes
For __all_sync, __any_sync, and __ballot_sync, a mask must be passed that
specifies the threads participating in the call. A bit, representing the thread's lane ID,
must be set for each participating thread to ensure they are properly converged before
the intrinsic is executed by the hardware. All active threads named in mask must
execute the same intrinsic with the same mask, or the result is undefined.
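For illustration, a common use of __ballot_sync() is to count, per warp, how many threads satisfy a predicate and let one lane publish the count; this sketch assumes full warps (blockDim.x a multiple of 32), so that 0xffffffff is a valid participation mask.
__global__ void countPositive(const float* in, int* total, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int pred = (i < n) && (in[i] > 0.0f);

    // Every thread of the (full) warp participates in the vote.
    unsigned ballot = __ballot_sync(0xffffffff, pred);

    // Lane 0 adds the number of set bits, i.e. threads with a true predicate.
    if ((threadIdx.x & 0x1f) == 0)
        atomicAdd(total, __popc(ballot));
}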
B.14. Warp Match Functions
__match_any_sync and __match_all_sync perform a broadcast-and-compare
operation of a variable between threads within a warp.
Supported by devices of compute capability 7.x or higher.
B.14.1. Synopsis
unsigned int __match_any_sync(unsigned mask, T value);
unsigned int __match_all_sync(unsigned mask, T value, int *pred);
T can be int, unsigned int, long, unsigned long, long long, unsigned long
long, float or double.
B.14.2. Description
The __match_sync() intrinsics permit a broadcast-and-compare of a value value
across threads in a warp after synchronizing threads named in mask.
__match_any_sync
Returns the mask of threads in mask that have the same value of value
__match_all_sync
Returns mask if all threads in mask have the same value for value; otherwise 0 is
returned. Predicate pred is set to true if all threads in mask have the same value of
value; otherwise the predicate is set to false.
The new *_sync match intrinsics take in a mask indicating the threads participating in
the call. A bit, representing the thread's lane id, must be set for each participating thread
to ensure they are properly converged before the intrinsic is executed by the hardware.
All non-exited threads named in mask must execute the same intrinsic with the same
mask, or the result is undefined.
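As an illustrative sketch for devices of compute capability 7.x or higher, __match_any_sync() can group the lanes of a warp by a key and elect one leader per group; the names groupByKey and groupLeader are hypothetical, and the grid is assumed to exactly cover the input with full warps.
__global__ void groupByKey(const int* keys, int* groupLeader)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int key = keys[i];

    // peers has one bit set for every lane in the warp holding the same key.
    unsigned peers = __match_any_sync(0xffffffff, key);

    // Elect the lowest-numbered lane of each group as its leader.
    int leaderLane = __ffs(peers) - 1;
    int laneId = threadIdx.x & 0x1f;
    groupLeader[i] = (laneId == leaderLane) ? 1 : 0;
}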
B.15. Warp Shuffle Functions
__shfl_sync, __shfl_up_sync, __shfl_down_sync, and __shfl_xor_sync
exchange a variable between threads within a warp.
Supported by devices of compute capability 3.x or higher.
Deprecation Notice: __shfl, __shfl_up, __shfl_down, and __shfl_xor have been
deprecated as of CUDA 9.0.
B.15.1. Synopsis
T __shfl_sync(unsigned mask, T var, int srcLane, int width=warpSize);
T __shfl_up_sync(unsigned mask, T var, unsigned int delta, int width=warpSize);
T __shfl_down_sync(unsigned mask, T var, unsigned int delta, int width=warpSize);
T __shfl_xor_sync(unsigned mask, T var, int laneMask, int width=warpSize);
T can be int, unsigned int, long, unsigned long, long long, unsigned long
long, float or double. With the cuda_fp16.h header included, T can also be __half
or __half2.
B.15.2. Description
The __shfl_sync() intrinsics permit exchanging of a variable between threads within
a warp without use of shared memory. The exchange occurs simultaneously for all active
threads within the warp (and named in mask), moving 4 or 8 bytes of data per thread
depending on the type.
Threads within a warp are referred to as lanes, and may have an index between 0 and
warpSize-1 (inclusive). Four source-lane addressing modes are supported:
__shfl_sync()
Direct copy from indexed lane
__shfl_up_sync()
Copy from a lane with lower ID relative to caller
__shfl_down_sync()
Copy from a lane with higher ID relative to caller
__shfl_xor_sync()
Copy from a lane based on bitwise XOR of own lane ID
Threads may only read data from another thread which is actively participating in
the __shfl_sync() command. If the target thread is inactive, the retrieved value is
undefined.
All of the __shfl_sync() intrinsics take an optional width parameter which alters
the behavior of the intrinsic. width must have a value which is a power of 2; results are
undefined if width is not a power of 2, or is a number greater than warpSize.
__shfl_sync() returns the value of var held by the thread whose ID is given by
srcLane. If width is less than warpSize then each subsection of the warp behaves as
a separate entity with a starting logical lane ID of 0. If srcLane is outside the range
[0:width-1], the value returned corresponds to the value of var held by the srcLane
modulo width (i.e. within the same subsection).
__shfl_up_sync() calculates a source lane ID by subtracting delta from the caller's
lane ID. The value of var held by the resulting lane ID is returned: in effect, var is
shifted up the warp by delta lanes. If width is less than warpSize then each subsection
of the warp behaves as a separate entity with a starting logical lane ID of 0. The source
lane index will not wrap around the value of width, so effectively the lower delta lanes
will be unchanged.
__shfl_down_sync() calculates a source lane ID by adding delta to the caller's lane
ID. The value of var held by the resulting lane ID is returned: this has the effect of
shifting var down the warp by delta lanes. If width is less than warpSize then each
subsection of the warp behaves as a separate entity with a starting logical lane ID of 0.
As for __shfl_up_sync(), the ID number of the source lane will not wrap around the
value of width and so the upper delta lanes will remain unchanged.
__shfl_xor_sync() calculates a source lane ID by performing a bitwise XOR of
the caller's lane ID with laneMask: the value of var held by the resulting lane ID is
returned. If width is less than warpSize then each group of width consecutive threads
are able to access elements from earlier groups of threads, however if they attempt to
access elements from later groups of threads their own value of var will be returned.
This mode implements a butterfly addressing pattern such as is used in tree reduction
and broadcast.
The new *_sync shfl intrinsics take in a mask indicating the threads participating in the
call. A bit, representing the thread's lane id, must be set for each participating thread to
ensure they are properly converged before the intrinsic is executed by the hardware. All
non-exited threads named in mask must execute the same intrinsic with the same mask,
or the result is undefined.
B.15.3. Return Value
All __shfl_sync() intrinsics return the 4-byte word referenced by var from the source
lane ID as an unsigned integer. If the source lane ID is out of range or the source thread
has exited, the calling thread's own var is returned.
B.15.4. Notes
Threads may only read data from another thread which is actively participating in
the __shfl_sync() command. If the target thread is inactive, the retrieved value is
undefined.
width must be a power-of-2 (i.e., 2, 4, 8, 16 or 32). Results are unspecified for other
values.
B.15.5. Examples
B.15.5.1. Broadcast of a single value across a warp
#include <stdio.h>

__global__ void bcast(int arg) {
    int laneId = threadIdx.x & 0x1f;
    int value;
    if (laneId == 0)        // Note unused variable for
        value = arg;        // all threads except lane 0
    value = __shfl_sync(0xffffffff, value, 0);   // Synchronize all threads in warp, and get "value" from lane 0
    if (value != arg)
        printf("Thread %d failed.\n", threadIdx.x);
}

int main() {
    bcast<<< 1, 32 >>>(1234);
    cudaDeviceSynchronize();

    return 0;
}
B.15.5.2. Inclusive plus-scan across sub-partitions of 8 threads
#include <stdio.h>

__global__ void scan4() {
    int laneId = threadIdx.x & 0x1f;
    // Seed sample starting value (inverse of lane ID)
    int value = 31 - laneId;

    // Loop to accumulate scan within my partition.
    // Scan requires log2(n) == 3 steps for 8 threads
    // It works by an accumulated sum up the warp
    // by 1, 2, 4, 8 etc. steps.
    for (int i=1; i<=4; i*=2) {
        // We do the __shfl_sync unconditionally so that we
        // can read even from threads which won't do a
        // sum, and then conditionally assign the result.
        int n = __shfl_up_sync(0xffffffff, value, i, 8);
        if ((laneId & 7) >= i)
            value += n;
    }

    printf("Thread %d final value = %d\n", threadIdx.x, value);
}

int main() {
    scan4<<< 1, 32 >>>();
    cudaDeviceSynchronize();

    return 0;
}
B.15.5.3. Reduction across a warp
#include <stdio.h>

__global__ void warpReduce() {
    int laneId = threadIdx.x & 0x1f;
    // Seed starting value as inverse lane ID
    int value = 31 - laneId;

    // Use XOR mode to perform butterfly reduction
    for (int i=16; i>=1; i/=2)
        value += __shfl_xor_sync(0xffffffff, value, i, 32);

    // "value" now contains the sum across all threads
    printf("Thread %d final value = %d\n", threadIdx.x, value);
}

int main() {
    warpReduce<<< 1, 32 >>>();
    cudaDeviceSynchronize();

    return 0;
}
B.16. Warp matrix functions [PREVIEW FEATURE]
C++ warp matrix operations leverage Tensor Cores to accelerate matrix problems of the
form D=A*B+C. This requires co-operation from all threads in a warp.
These warp matrix functions are a preview feature supported by devices of compute
capability 7.0 or higher. The data structures and APIs described here are subject to
change in future releases, and may not be compatible with those future releases.
B.16.1. Description
All following functions and types are defined in the namespace nvcuda::wmma.
template<typename Use, int m, int n, int k, typename T, typename Layout=void>
class fragment;

void load_matrix_sync(fragment<...> &a, const T* mptr, unsigned ldm);
void load_matrix_sync(fragment<...> &a, const T* mptr, unsigned ldm,
                      layout_t layout);
void store_matrix_sync(T* mptr, const fragment<...> &a, unsigned ldm,
                       layout_t layout);
void fill_fragment(fragment<...> &a, const T& v);
void mma_sync(fragment<...> &d, const fragment<...> &a, const fragment<...> &b,
              const fragment<...> &c, bool satf=false);
fragment
An overloaded class containing a section of a matrix distributed across all threads
in the warp. The mapping of matrix elements into fragment internal storage is
unspecified and subject to change in future architectures.
Only certain combinations of template arguments are allowed. The first template
parameter specifies how the fragment will participate in the matrix operation.
Acceptable values for Use are:
‣ matrix_a when the fragment is used as the first multiplicand, A,
‣ matrix_b when the fragment is used as the second multiplicand, B, or
‣ accumulator when the fragment is used as the source or destination accumulators (C or D, respectively).
The m, n and k sizes describe the shape of the warp-wide matrix tiles participating
in the multiply-accumulate operation. The dimension of each tile depends on its
role. For matrix_a the tile takes dimension m x k; for matrix_b the dimension
is k x n, and accumulator tiles are m x n. Only the three following (m, n, k)
configurations are supported: (16, 16, 16), (32, 8, 16), and (8, 32, 16).
The data type, T, must be __half for multiplicands, and can be either __half or
float for accumulators. The Layout parameter must be specified for matrix_a and
matrix_b fragments. row_major or col_major indicate that elements within a
matrix row or column are contiguous in memory, respectively. The Layout parameter
for an accumulator matrix should retain the default value of void. A row or column
layout is specified only when the accumulator is loaded or stored as described below.
load_matrix_sync
Waits until all threads in the warp are converged and then loads the matrix fragment
a from memory. mptr must be a 128-bit aligned pointer pointing to the first element
of the matrix in memory. ldm describes the stride in elements between consecutive
rows (for row major layout) or columns (for column major layout) and must be a
multiple of 16 bytes (i.e., 8 __half elements or 4 float elements). If the fragment is
an accumulator, the layout argument must be specified as either mem_row_major or
mem_col_major. For matrix_a and matrix_b fragments, the layout is inferred from
the fragment's Layout parameter. The values of mptr, ldm, layout and all template
parameters for a must be the same for all threads in the warp. This function must be
called by all threads in the warp, or the result is undefined.
store_matrix_sync
Waits until all threads in the warp are converged and then stores the matrix fragment
a to memory. mptr must be a 128-bit aligned pointer pointing to the first element
of the matrix in memory. ldm describes the stride in elements between consecutive
rows (for row major layout) or columns (for column major layout) and must be a
multiple of 16 bytes. The layout of the output matrix must be specified as either
mem_row_major or mem_col_major. The values of mptr, ldm, layout and all
template parameters for a must be the same for all threads in the warp. This function
must be called by all threads in the warp, or the result is undefined.
fill_fragment
Fill a matrix fragment with a constant value v. Because the mapping of matrix
elements to each fragment is unspecified, this function is ordinarily called by all
threads in the warp with a common value for v.
mma_sync
Waits until all threads in the warp are converged and then performs the warp-synchronous matrix multiply-accumulate operation D=A*B+C. The in-place operation,
C=A*B+C, is also supported. The value of satf and template parameters for each
matrix fragment must be the same for all threads in the warp. Also, the template
parameters m, n and k must match between fragments A, B, C and D. This function
must be called by all threads in the warp, or the result is undefined.
If satf (saturate to finite value) mode is true, the following additional numerical
properties apply for the destination accumulator:
‣ If an element result is +Infinity, the corresponding accumulator will contain +MAX_NORM
‣ If an element result is -Infinity, the corresponding accumulator will contain -MAX_NORM
‣ If an element result is NaN, the corresponding accumulator will contain +0
Because the map of matrix elements into each thread's fragment is unspecified,
individual matrix elements must be accessed from memory (shared or global) after
calling store_matrix_sync. In the special case where all threads in the warp will
apply an element-wise operation uniformly to all fragment elements, direct element
access can be implemented using the following fragment class members.
enum fragment