feat(linux): Update ML library versions and documentation
This commit updates the machine-learning library versions and refreshes the associated documentation:
- Update ARM Compute Library from 24.12 to 52.7.0
- Update Arm NN from 24.11 to 26.01
- Update NNStreamer from 2.4.2 to 2.6.0
- Update ONNX Runtime from 1.20.1 to 1.23.2
- Update TensorFlow Lite from 2.18.0 to 2.20.0
- Refresh all test outputs and benchmark results
- Add ML components to AM62DX documentation TOC
- Update component table with latest library information
Signed-off-by: Pratham Deshmukh <p-deshmukh@ti.com>
source/linux/Foundational_Components/Machine_Learning/tflite.rst (+66 -28)
@@ -18,7 +18,7 @@ It supports on-device inference with low latency and a compact binary size. You

 Features
 ********

-- TensorFlow Lite v2.18.0 via Yocto - `meta-arago-extras/recipes-framework/tensorflow-lite/tensorflow-lite_2.18.0.bb <https://web.git.yoctoproject.org/meta-arago/tree/meta-arago-extras/recipes-framework/tensorflow-lite/tensorflow-lite_2.18.0.bb?h=11.00.09>`__
+- TensorFlow Lite v2.20.0 via Yocto - `meta-arago-extras/recipes-framework/tensorflow-lite/tensorflow-lite_2.20.0.bb <https://web.git.yoctoproject.org/meta-arago/tree/meta-arago-extras/recipes-framework/tensorflow-lite/tensorflow-lite_2.20.0.bb?h=11.00.09>`__
 - Multithreaded computation with acceleration using Arm Neon SIMD instructions on Cortex-A cores
 - C++ Library and Python interpreter (supported Python version 3)
 - TensorFlow Lite Model benchmark Tool (i.e. :command:`benchmark_model`)
@@ -89,23 +89,21 @@ The output of the benchmarking application should be similar to:

-    INFO: Inference timings in us: Init: 6418, First inference: 1041765, Warmup (avg): 1.04176e+06, Inference (avg): 971535
+    INFO: Inference timings in us: Init: 5579, First inference: 1357602, Warmup (avg): 1.3576e+06, Inference (avg): 1.24027e+06
     INFO: Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
-    INFO: Memory footprint delta from the start of the tool (MB): init=6.14844 overall=109.848
+    INFO: Memory footprint delta from the start of the tool (MB): init=6.36328 overall=109.832

 Where,
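The ``(avg)`` figures in these logs are reported in microseconds. As a small illustrative sketch (the log line below is copied from the updated output in this hunk), converting them into seconds and throughput:

```python
import re

# Average-inference log line copied from the updated benchmark output above.
line = ("INFO: Inference timings in us: Init: 5579, First inference: 1357602, "
        "Warmup (avg): 1.3576e+06, Inference (avg): 1.24027e+06")

# benchmark_model reports timings in microseconds.
match = re.search(r"Inference \(avg\): ([0-9.eE+]+)", line)
avg_us = float(match.group(1))   # microseconds per inference
avg_s = avg_us / 1e6             # seconds per inference
fps = 1.0 / avg_s                # inferences per second

print(f"average inference: {avg_s:.3f} s (~{fps:.2f} inferences/s)")
```

So the refreshed numbers correspond to roughly 0.8 inferences per second on this model.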
@@ -130,26 +128,23 @@ The output of the benchmarking application should be similar to,

     INFO: Inference timings in us: Init: 614333, First inference: 905463, Warmup (avg): 905463, Inference (avg): 899641
     INFO: Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
-    INFO: Memory footprint delta from the start of the tool (MB): init=133.086 overall=149.531
+    INFO: Memory footprint delta from the start of the tool (MB): init=146.363 overall=150.141

 Where,
@@ -166,14 +161,14 @@ The following performance numbers are captured with :command:`benchmark_model` o

     :header: "SOC", "Delegates", "Inference Time (sec)", "Initialization Time (ms)", "Overall Memory Footprint (MB)"

 Based on the above data, using the XNNPACK delegate significantly improves inference times across all SoCs, though it generally increases initialization time and overall memory footprint.
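For context, ``--graph``, ``--num_threads``, and ``--use_xnnpack`` are standard flags of TensorFlow Lite's :command:`benchmark_model` tool. A hedged sketch of the two invocations behind such a with/without-XNNPACK comparison (the model path is a placeholder, not an SDK file; the thread count is an assumption):

```shell
# Hedged sketch: --graph, --num_threads, and --use_xnnpack are standard
# benchmark_model flags; the model path below is a placeholder.
MODEL=/path/to/model.tflite
CMD_DEFAULT="benchmark_model --graph=$MODEL --num_threads=4 --use_xnnpack=false"
CMD_XNNPACK="benchmark_model --graph=$MODEL --num_threads=4 --use_xnnpack=true"

# Run each command on the target board and compare the
# "Inference (avg)" and "Memory footprint delta" lines of the two logs.
echo "$CMD_DEFAULT"
echo "$CMD_XNNPACK"
```

Comparing the two logs line by line is what produces the delegate columns in the table above.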
@@ -185,10 +180,12 @@ Based on the above data, using the XNNPACK delegate significantly improves infer

 Example Applications
 ********************

-|__SDK_FULL_NAME__| has integrated opensource components like NNStreamer which can be used for neural network inferencing using the sample tflite models under :file:`/usr/share/oob-demo-assets/models/`
-Checkout the Object Detection usecase under :ref:`TI Apps Launcher - User Guide <TI-Apps-Launcher-User-Guide-label>`
+.. ifconfig:: CONFIG_part_variant in ('AM62X', 'AM62LX', 'AM62PX')

-Alternatively, if a display is connected, you can run the Object Detection pipeline using this command,
+    |__SDK_FULL_NAME__| has integrated opensource components like NNStreamer which can be used for neural network inferencing using the sample tflite models under :file:`/usr/share/oob-demo-assets/models/`
+
+    Checkout the Object Detection usecase under :ref:`TI Apps Launcher - User Guide <TI-Apps-Launcher-User-Guide-label>`
+
+    Alternatively, if a display is connected, you can run the Object Detection pipeline using this command,

 .. ifconfig:: CONFIG_part_variant in ('AM62X', 'AM62LX')
@@ -248,6 +245,47 @@ Alternatively, if a display is connected, you can run the Object Detection pipel

 The above GStreamer pipeline reads an H.264 video file, decodes it, and processes it for object detection using a TensorFlow Lite model, displaying bounding boxes around detected objects. The processed video is then composited and rendered on the screen using the ``kmssink`` element.

+.. ifconfig:: CONFIG_part_variant in ('AM62DX')
+
+    |__SDK_FULL_NAME__| has integrated opensource components like NNStreamer which can be used for neural network inferencing using the sample TensorFlow Lite models under :file:`/usr/share/oob-demo-assets/models/`
+
+    If an audio input device is connected, you can run the Audio Classification pipeline using this command:
+
+    The above GStreamer pipeline captures real-time audio from an ALSA source, converts it to the required format, and processes it for audio event classification using the YAMNet TensorFlow Lite model. The audio data is aggregated into tensors, normalized for machine learning input, and classified to identify various audio events and sounds. The classification results are decoded to human-readable labels and output to stdout.

 .. attention::

    The Example Applications section is not applicable for AM64x
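The added AM62DX text describes the audio-classification pipeline, but the gst-launch command itself is elided from this extraction. Purely as a hypothetical illustration of what such an NNStreamer pipeline can look like, a sketch is given below: every element property, the model path, and the label file are assumptions, not the SDK's actual demo command. YAMNet consumes 0.975 s windows of 16 kHz mono float32 audio, hence the 15600-sample tensors and the divide-by-32768 rescaling.

```shell
# Hypothetical sketch only: properties, model path, and label file below are
# assumptions, not the SDK's actual pipeline (which is elided in this diff).
MODEL=/usr/share/oob-demo-assets/models/yamnet.tflite        # assumed path
LABELS=/usr/share/oob-demo-assets/models/yamnet_labels.txt   # assumed path

# Capture 16 kHz mono audio, batch it into 15600-sample tensors, rescale
# int16 samples to float32 in [-1, 1], classify with the tflite model, and
# decode the top class index to a human-readable label.
PIPELINE="alsasrc ! audioconvert ! audio/x-raw,format=S16LE,rate=16000,channels=1"
PIPELINE="$PIPELINE ! tensor_converter frames-per-tensor=15600"
PIPELINE="$PIPELINE ! tensor_transform mode=arithmetic option=typecast:float32,div:32768.0"
PIPELINE="$PIPELINE ! tensor_filter framework=tensorflow-lite model=$MODEL"
PIPELINE="$PIPELINE ! tensor_decoder mode=image_labeling option1=$LABELS"
PIPELINE="$PIPELINE ! fakesink"

# On the target board this would be launched as:
#   gst-launch-1.0 $PIPELINE
echo "$PIPELINE"
```

The structure (converter, transform, filter, decoder) mirrors the stages the description above walks through: aggregation into tensors, normalization, classification, and label decoding.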