Blazing-fast, production-ready YOLO inference for .NET
YoloDotNet is a modular, lightweight C# library for real-time computer vision and YOLO-based inference in .NET.
It provides high-performance inference for modern YOLO model families (YOLOv5u through YOLOv26, YOLO-World, YOLO-E, and RT-DETR), with explicit control over execution, memory, and preprocessing.
Built on .NET 8, ONNX Runtime, and SkiaSharp, YoloDotNet intentionally
avoids heavy computer vision frameworks such as OpenCV.
There is no Python runtime, no hidden preprocessing, and no implicit behavior —
only the components required for fast, predictable inference on Windows,
Linux, and macOS.
No Python. No magic. Just fast, deterministic YOLO — done properly for .NET.
YoloDotNet is designed for developers who need:
- ✅ Pure .NET — no Python runtime, no scripts
- ✅ Real performance — CPU, CUDA / TensorRT, OpenVINO, CoreML, DirectML
- ✅ Explicit configuration — predictable accuracy and memory usage
- ✅ Production readiness — engine caching, long-running stability
- ✅ Multiple vision tasks — detection, OBB, segmentation, pose, classification
Ideal for desktop apps, backend services, and real-time vision pipelines that require deterministic behavior and full control.
- Added Region of Interest (ROI) support, allowing inference to run on selected areas of an image or video stream (useful for surveillance, monitoring zones, and performance-focused pipelines)
- Added the option to draw edges on segmented objects for improved visual clarity
- Added helper methods for JSON export:
  - `ToJson()` — convert inference results to JSON
  - `SaveJson()` — save inference results directly to a JSON file
- Added helper methods for YOLO-formatted annotations:
  - `ToYoloFormat()` — convert results to YOLO annotation format
  - `SaveYoloFormat()` — save results as YOLO-compatible training data
- Added `GetContourPoints()` helper for extracting ordered contour points from segmented objects
- Updated YOLOv26 inference execution to align with other tasks, improving consistency and overall execution efficiency
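The export helpers above can be combined in a few lines. A minimal sketch — the method names come from the release notes, but the exact signatures (e.g. the overloads taking a file path) are assumptions:

```csharp
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.Models;
using YoloDotNet.Extensions;
using YoloDotNet.ExecutionProvider.Cpu;

using var yolo = new Yolo(new YoloOptions
{
    ExecutionProvider = new CpuExecutionProvider("model.onnx")
});
using var image = SKBitmap.Decode("image.jpg");

var results = yolo.RunObjectDetection(image, confidence: 0.25, iou: 0.7);

// Export results as JSON, either in-memory or straight to disk.
string json = results.ToJson();
results.SaveJson("results.json");

// Or save YOLO-formatted annotations, e.g. for building training data.
results.SaveYoloFormat("labels/image.txt");
```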
📖 Full release history: CHANGELOG.md
> [!TIP]
> **See the demos**
> Practical, runnable examples showcasing YoloDotNet features are available in the demo projects:
> 👉 Browse the demo folder
- For YOLOv26 models, export with opset=18
- For YOLOv5u–YOLOv12, export with opset=17
> [!IMPORTANT]
> Using the correct opset ensures optimal compatibility and performance with ONNX Runtime.
For more information on how to export models to ONNX, refer to https://docs.ultralytics.com/modes/export/
Example export commands (Ultralytics CLI):

```bash
# For YOLOv5u–YOLOv12 (opset 17)
yolo export model=yolov8n.pt format=onnx opset=17

# For YOLOv26 (opset 18)
yolo export model=yolo26n.pt format=onnx opset=18
```

> [!WARNING]
> **Model License Notice:**
> YoloDotNet is MIT licensed, but most Ultralytics YOLO models are AGPL-3.0 or require a commercial license for commercial use.
> You are responsible for ensuring your use of any model complies with its license.
> See Ultralytics Model Licensing for details.
```bash
dotnet add package YoloDotNet

# CPU (recommended starting point)
dotnet add package YoloDotNet.ExecutionProvider.Cpu

# Hardware-accelerated execution (choose one)
dotnet add package YoloDotNet.ExecutionProvider.Cuda
dotnet add package YoloDotNet.ExecutionProvider.OpenVino
dotnet add package YoloDotNet.ExecutionProvider.CoreML
dotnet add package YoloDotNet.ExecutionProvider.DirectML
```

💡 **Note:** The CUDA execution provider includes optional TensorRT acceleration. No separate TensorRT package is required.
```csharp
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.Models;
using YoloDotNet.Extensions;
using YoloDotNet.ExecutionProvider.Cpu;

using var yolo = new Yolo(new YoloOptions
{
    ExecutionProvider = new CpuExecutionProvider("model.onnx")
});

using var image = SKBitmap.Decode("image.jpg");

// Note: The IoU parameter is used for NMS-based models.
// For YOLOv10 and YOLOv26, IoU is ignored since post-processing
// is handled internally by the model.
var results = yolo.RunObjectDetection(image, confidence: 0.25, iou: 0.7);

image.Draw(results);
image.Save("result.jpg");
```

You're now running YOLO inference in pure C#.
YOLO inference accuracy is not automatic.
Preprocessing settings such as image resize mode, sampling method, and confidence/IoU thresholds must match how the model was trained.
These settings directly control the accuracy–performance tradeoff and should be treated as part of the model itself.
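To make the IoU threshold concrete: during non-maximum suppression, two boxes whose intersection-over-union exceeds the threshold are treated as duplicates, and the lower-confidence one is suppressed. A self-contained illustration (not part of the YoloDotNet API):

```csharp
// IoU of two axis-aligned boxes given as (X1, Y1, X2, Y2) corner coordinates.
static double IoU((double X1, double Y1, double X2, double Y2) a,
                  (double X1, double Y1, double X2, double Y2) b)
{
    double ix = Math.Max(0, Math.Min(a.X2, b.X2) - Math.Max(a.X1, b.X1));
    double iy = Math.Max(0, Math.Min(a.Y2, b.Y2) - Math.Max(a.Y1, b.Y1));
    double inter = ix * iy;
    double union = (a.X2 - a.X1) * (a.Y2 - a.Y1)
                 + (b.X2 - b.X1) * (b.Y2 - b.Y1) - inter;
    return union > 0 ? inter / union : 0;
}

// Example: (0,0,10,10) vs (2,0,12,10) overlap heavily: IoU = 80/120 ≈ 0.67.
// With iou: 0.7 both boxes survive NMS; at iou: 0.5 the weaker one is suppressed.
```

This is why the threshold belongs to the model configuration: a higher IoU threshold keeps more overlapping detections, a lower one merges them more aggressively.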
📖 Before tuning models or comparing results, read:
👉 Accuracy & Configuration Guide
| Classification | Object Detection | OBB Detection | Segmentation | Pose Estimation |
|---|---|---|---|---|
| pexels.com | pexels.com | pexels.com | pexels.com | pexels.com |
The following YOLO models have been tested and verified with YoloDotNet using official Ultralytics exports and default heads.
| Classification | Object Detection | Segmentation | Pose Estimation | OBB Detection |
|---|---|---|---|---|
| YOLOv8-cls<br>YOLOv11-cls<br>YOLOv12-cls<br>YOLOv26-cls | YOLOv5u<br>YOLOv8<br>YOLOv9<br>YOLOv10<br>YOLOv11<br>YOLOv12<br>YOLOv26<br>RT-DETR | YOLOv8-seg<br>YOLOv11-seg<br>YOLOv12-seg<br>YOLOv26-seg<br>YOLO-World (v2) | YOLOv8-pose<br>YOLOv11-pose<br>YOLOv12-pose<br>YOLOv26-pose | YOLOv8-obb<br>YOLOv11-obb<br>YOLOv12-obb<br>YOLOv26-obb |
Hands-on examples are available in the demo folder:
Includes image inference, video streams, GPU acceleration, segmentation, and large-image workflows.
| Provider | Windows | Linux | macOS | Documentation |
|---|---|---|---|---|
| CPU | ✅ | ✅ | ✅ | CPU README |
| CUDA / TensorRT | ✅ | ✅ | ❌ | CUDA README |
| OpenVINO | ✅ | ✅ | ❌ | OpenVINO README |
| CoreML | ❌ | ❌ | ✅ | CoreML README |
| DirectML | ✅ | ❌ | ❌ | DirectML README |
ℹ️ Only one execution provider package may be referenced.
Mixing providers will cause native runtime conflicts.
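In practice this means a project file should reference the core package plus exactly one provider. A sketch — package names are taken from the install commands above, the version numbers are placeholders:

```xml
<ItemGroup>
  <!-- Core package plus exactly one execution provider -->
  <PackageReference Include="YoloDotNet" Version="x.y.z" />
  <PackageReference Include="YoloDotNet.ExecutionProvider.Cuda" Version="x.y.z" />
  <!-- Do NOT also reference Cpu/OpenVino/CoreML/DirectML here -->
</ItemGroup>
```

Referencing two providers pulls in two sets of native ONNX Runtime binaries, which is the source of the runtime conflicts mentioned above.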
YoloDotNet focuses on stable, low-overhead inference where runtime cost is dominated by the execution provider and model.
📊 Benchmarks: /test/YoloDotNet.Benchmarks
- Stable latency after warm-up
- Clean scaling from CPU → GPU → TensorRT
- Predictable allocation behavior
- Suitable for real-time and long-running services
- Core package is provider-agnostic
- Execution providers are separate NuGet packages
- Native ONNX Runtime dependencies are isolated
Why this matters: fewer conflicts, predictable deployment, and production-safe behavior.
⭐ Star the repo
💬 Share feedback
🤝 Sponsor development
MIT License
Copyright (c) Niklas Swärd
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- YoloDotNet is licensed under the MIT License and provides an ONNX inference engine for YOLO models exported using Ultralytics YOLO tooling.
- This project does not include, distribute, download, or bundle any pretrained models.
- Users must supply their own ONNX models.
- YOLO ONNX models produced using Ultralytics tooling are typically licensed under AGPL-3.0 or a separate commercial license from Ultralytics.
- YoloDotNet does not impose, modify, or transfer any license terms related to user-supplied models.
- Users are solely responsible for ensuring that their use of any model complies with the applicable license terms, including requirements related to commercial use, distribution, or network deployment.