Inferencing using YOLO with OpenVINO format #7084
@Averen19 I don't have NCS experience myself, but we've run OpenVINO on a variety of CPU backends with excellent results. It's usually one of the fastest export formats. See #6613 for details: Run YOLOv5 benchmarks on a PyTorch model for all supported export formats. It currently operates on CPU; future updates will implement GPU inference.
Colab++ V100 High-RAM CPU Results
Colab++ A100 High-RAM CPU Results
macOS Intel CPU Results (CoreML-capable)
Ultralytics Hyperplane EPYC Milan AMD CPU Results
Resolves #6586
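A hedged sketch of how the export-then-benchmark flow above can be reproduced. It assumes a standard `ultralytics/yolov5` checkout (so `export.py` with the `--include openvino` flag exists); flags and paths may differ between repo versions, so verify against your checkout before running.

```python
# Sketch: export yolov5s.pt to OpenVINO IR, then time repeated inference calls.
# The export command mirrors the yolov5 repo's documented usage; it is an
# assumption that your checkout uses the same flags.
import statistics
import subprocess
import time

def export_openvino(weights="yolov5s.pt"):
    # Equivalent to: python export.py --weights yolov5s.pt --include openvino
    subprocess.run(
        ["python", "export.py", "--weights", weights, "--include", "openvino"],
        check=True,
    )

def time_inference(infer, n=100):
    """Return the mean latency in milliseconds of calling infer() n times."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.mean(samples)
```

`time_inference` takes any zero-argument callable, so the same timer works for comparing PyTorch, ONNX, and OpenVINO backends side by side.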
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcome! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
Search before asking
Question
Does anyone know of any resources or code for running inference with the OpenVINO toolkit using the converted IR files?
I would like to detect objects using the NCS 2 on the Raspberry Pi 4.
Additional
No response
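A minimal sketch of the inference the question asks about, using the OpenVINO runtime Python API. The IR path (`yolov5s_openvino_model/yolov5s.xml`), the 1×3×640×640 input, and the (1, 25200, 85) output layout are assumptions based on the default `export.py --include openvino` output for a 640-pixel COCO model; `"MYRIAD"` is the OpenVINO device name that targets the NCS2 stick.

```python
def xywh2xyxy(box):
    """Convert one YOLOv5 (cx, cy, w, h) box to (x1, y1, x2, y2) corners."""
    cx, cy, w, h = box
    return [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]

def detect(image, xml_path="yolov5s_openvino_model/yolov5s.xml", conf_thres=0.25):
    """Run one preprocessed image (1x3x640x640 float32, 0-1) through the IR model.

    Sketch only: paths, input size, and output layout are assumptions from the
    default YOLOv5 OpenVINO export; adjust for your model.
    """
    from openvino.runtime import Core  # OpenVINO >= 2022.1 Python API

    core = Core()
    # "MYRIAD" targets the NCS2; swap in "CPU" to test on the Pi itself first
    model = core.compile_model(xml_path, "MYRIAD")
    out = model([image])[model.output(0)]     # assumed shape: (1, 25200, 85)
    dets = out[0][out[0][:, 4] > conf_thres]  # keep rows above objectness threshold
    return [xywh2xyxy(d[:4]) for d in dets]
```

A full pipeline would still need letterboxing the input image and non-maximum suppression on the returned boxes; starting with `"CPU"` as the device is a good way to confirm the model works before plugging in the NCS2.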