docs: Update ML library versions and documentation
This commit updates the ML library versions and refreshes the related documentation:
- Update Arm Compute Library from 24.12 to 52.7.0
- Update Arm NN from 24.11 to 26.01
- Update NNStreamer from 2.4.2 to 2.6.0
- Update ONNX Runtime from 1.20.1 to 1.23.2
- Update TensorFlow Lite from 2.18.0 to 2.20.0
- Refresh all test outputs and benchmark results
- Add ML components to AM62DX documentation TOC
- Update component table with latest library information
Signed-off-by: Pratham Deshmukh <p-deshmukh@ti.com>
Changed file: ``source/linux/Foundational_Components/Machine_Learning/tflite.rst`` (20 additions, 25 deletions)
.. code-block:: diff

   @@ -18,7 +18,7 @@ It supports on-device inference with low latency and a compact binary size. You
    Features
    ********

   -- TensorFlow Lite v2.18.0 via Yocto - `meta-arago-extras/recipes-framework/tensorflow-lite/tensorflow-lite_2.18.0.bb <https://web.git.yoctoproject.org/meta-arago/tree/meta-arago-extras/recipes-framework/tensorflow-lite/tensorflow-lite_2.18.0.bb?h=11.00.09>`__
   +- TensorFlow Lite v2.20.0 via Yocto - `meta-arago-extras/recipes-framework/tensorflow-lite/tensorflow-lite_2.20.0.bb <https://web.git.yoctoproject.org/meta-arago/tree/meta-arago-extras/recipes-framework/tensorflow-lite/tensorflow-lite_2.20.0.bb?h=11.00.09>`__
    - Multithreaded computation with acceleration using Arm Neon SIMD instructions on Cortex-A cores
    - C++ Library and Python interpreter (supported Python version 3)
    - TensorFlow Lite Model benchmark Tool (i.e. :command:`benchmark_model`)
.. code-block:: diff

   @@ -89,23 +89,21 @@ The output of the benchmarking application should be similar to:
   -INFO: Inference timings in us: Init: 6418, First inference: 1041765, Warmup (avg): 1.04176e+06, Inference (avg): 971535
   +INFO: Inference timings in us: Init: 5579, First inference: 1357602, Warmup (avg): 1.3576e+06, Inference (avg): 1.24027e+06
    INFO: Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
   -INFO: Memory footprint delta from the start of the tool (MB): init=6.14844 overall=109.848
   +INFO: Memory footprint delta from the start of the tool (MB): init=6.36328 overall=109.832

    Where,
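The ``INFO:`` lines emitted by :command:`benchmark_model` follow a fixed format, so the refreshed numbers can be pulled out of a captured log mechanically. The following sketch (the ``parse_benchmark_log`` helper is illustrative, not part of the TI SDK or the benchmark tool) extracts the average inference time and memory footprint from the log text shown above:

```python
import re

# Sample lines captured from a benchmark_model run (values from the log above).
log = """\
INFO: Inference timings in us: Init: 5579, First inference: 1357602, Warmup (avg): 1.3576e+06, Inference (avg): 1.24027e+06
INFO: Memory footprint delta from the start of the tool (MB): init=6.36328 overall=109.832
"""

def parse_benchmark_log(text):
    """Extract the average inference time (us) and the init/overall memory
    deltas (MB) from benchmark_model output. Returns a dict of floats."""
    result = {}
    m = re.search(r"Inference \(avg\): ([\d.e+]+)", text)
    if m:
        result["inference_avg_us"] = float(m.group(1))
    m = re.search(r"init=([\d.]+) overall=([\d.]+)", text)
    if m:
        result["init_mb"] = float(m.group(1))
        result["overall_mb"] = float(m.group(2))
    return result

stats = parse_benchmark_log(log)
print(stats)
# inference_avg_us comes back as 1240270.0 (i.e. 1.24027e+06 us ~= 1.24 s)
```

A script like this makes it easy to diff benchmark results across SDK releases instead of comparing the logs by eye.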
.. code-block:: diff

   @@ -130,26 +128,23 @@ The output of the benchmarking application should be similar to,
    INFO: Inference timings in us: Init: 614333, First inference: 905463, Warmup (avg): 905463, Inference (avg): 899641
    INFO: Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
   -INFO: Memory footprint delta from the start of the tool (MB): init=133.086 overall=149.531
   +INFO: Memory footprint delta from the start of the tool (MB): init=146.363 overall=150.141

    Where,
.. code-block:: diff

   @@ -166,14 +161,14 @@ The following performance numbers are captured with :command:`benchmark_model` o
       :header: "SOC", "Delegates", "Inference Time (sec)", "Initialization Time (ms)", "Overall Memory Footprint (MB)"

    Based on the above data, using the XNNPACK delegate significantly improves inference times across all SoCs, though it generally increases initialization time and overall memory footprint.
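That trade-off can be sanity-checked with quick arithmetic on the two benchmark logs quoted earlier, assuming (as the summary sentence suggests) that the second run, with its much larger initialization time and memory footprint, is the one using the XNNPACK delegate:

```python
# Average inference times (microseconds) taken from the two benchmark logs above.
default_avg_us = 1.24027e6   # run assumed to be without XNNPACK
xnnpack_avg_us = 899641.0    # run assumed to be with XNNPACK

# Convert to seconds and compute the speedup factor.
default_s = default_avg_us / 1e6
xnnpack_s = xnnpack_avg_us / 1e6
speedup = default_avg_us / xnnpack_avg_us

print(f"default: {default_s:.3f} s, xnnpack: {xnnpack_s:.3f} s, speedup: {speedup:.2f}x")

# Initialization cost moves the other way: 5579 us vs 614333 us from the logs.
init_overhead_ms = (614333 - 5579) / 1e3
print(f"extra initialization with XNNPACK: {init_overhead_ms:.1f} ms")
```

On these two runs the delegate buys roughly a 1.4x faster average inference at the cost of around 0.6 s of extra one-time initialization, which matches the qualitative conclusion drawn in the table's summary.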