🛠 A lite C++ toolkit of awesome AI models, with support for ONNXRuntime, MNN, TNN, NCNN and TensorRT.
Deep Learning API and server in C++14 with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and t-SNE.
FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
BEVDet implemented by TensorRT, C++; Achieving real-time performance on Orin
Deploy a Stable Diffusion model with ONNX/TensorRT + Triton Inference Server.
NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU
YOLOv5 TensorRT implementations.
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate.
A TensorRT version of UNet, inspired by tensorrtx.
Use DBNet to detect words or barcodes; knowledge distillation and Python TensorRT inference are also provided.
Using TensorRT for Inference Model Deployment.
ViTPose without MMCV dependencies.
Based on TensorRT v8.0+, deploys YOLOv8 detection, pose, segmentation and tracking with C++ and Python APIs.
ComfyUI Depth Anything (v1/v2) TensorRT custom node (up to 14x faster).
C++ inference code for the SMOKE 3D object detection model.
Export a TensorRT engine (from ONNX) and run inference with C++.
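The ONNX-to-TensorRT flow that several of the projects above follow can be sketched with `trtexec`, the command-line tool that ships with TensorRT (file names here are placeholders, not from any specific repo):

```shell
# Build a serialized TensorRT engine from an ONNX model.
# model.onnx / model.engine are hypothetical file names; --fp16 enables
# half-precision kernels where the GPU supports them.
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

The resulting `.engine` file can then be deserialized with the TensorRT C++ runtime for inference.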
Convert YOLO models to ONNX/TensorRT, with batched NMS added.
Roundabout traffic analysis using computer vision.
An object tracking project with YOLOv5 v5.0 and DeepSORT, sped up with C++ and TensorRT.