TensorRT 4.0.1 Developer Guide
- Table of Contents
- What Is TensorRT?
- TensorRT Tasks
  - 2.1. Initializing TensorRT In C++
  - 2.2. Creating A Network Definition In C++
  - 2.3. Creating A Network Using The C++ API
  - 2.4. Building An Engine In C++
  - 2.5. Serializing A Model In C++
  - 2.6. Performing Inference In C++
  - 2.7. Memory Management In C++
  - 2.8. Initializing TensorRT In Python
  - 2.9. Creating A Network Definition In Python
  - 2.10. Creating A Network Using The Python API
  - 2.11. Building An Engine In Python
  - 2.12. Serializing A Model In Python
  - 2.13. Performing Inference In Python
  - 2.14. Extending TensorRT With Custom Layers
  - 2.15. Working With Mixed Precision
  - 2.16. Deploying A TensorRT Optimized Model
- Working With Deep Learning Frameworks
  - 3.1. Supported Operations
  - 3.2. Working With TensorFlow
    - 3.2.1. Freezing A TensorFlow Graph
    - 3.2.2. Freezing A Keras Model
    - 3.2.3. Converting A Frozen Graph To UFF
    - 3.2.4. Working With TensorFlow RNN Weights
      - 3.2.4.1. TensorFlow RNN Cells Supported In TensorRT
      - 3.2.4.2. Maintaining Model Consistency Between TensorFlow And TensorRT
      - 3.2.4.3. Workflow
      - 3.2.4.4. Dumping The TensorFlow Weights
      - 3.2.4.5. Loading Dumped Weights
      - 3.2.4.6. Converting The Weights To A TensorRT Format
      - 3.2.4.7. BasicLSTMCell Example
      - 3.2.4.8. Setting The Converted Weights And Biases
  - 3.3. Working With PyTorch And Other Frameworks
  - 3.4. Working With The TensorRT Lite Engine
- Samples
  - 4.1. sampleMNIST
  - 4.2. sampleMNISTAPI
  - 4.3. sampleUffMNIST
  - 4.4. sampleOnnxMNIST
  - 4.5. sampleGoogleNet
  - 4.6. sampleCharRNN
  - 4.7. sampleINT8
  - 4.8. samplePlugin
  - 4.9. sampleNMT
  - 4.10. sampleFasterRCNN
  - 4.11. sampleUffSSD
  - 4.12. sampleMovieLens
  - 4.13. lite_examples
  - 4.14. pytorch_to_trt
  - 4.15. resnet_as_a_service
  - 4.16. sample_onnx
  - 4.17. tf_to_trt
- Troubleshooting
- Appendix