I'm running this on a Jetson Orin Nano with JetPack 6.2 and TensorRT 10.3.
I ran:

./trtexec --onnx=/home/nvidia/nano_sam/nanosam-main/data/mobile_sam_mask_decoder.onnx \
  --saveEngine=/home/nvidia/nano_sam/nanosam-main/data/mobile_sam_mask_decoder.engine \
  --minShapes=point_coords:1x1x2,point_labels:1x1 \
  --optShapes=point_coords:1x1x2,point_labels:1x1 \
  --maxShapes=point_coords:1x10x2,point_labels:1x10
&&&& RUNNING TensorRT.trtexec [TensorRT v100300] # ./trtexec --onnx=/home/nvidia/nano_sam/nanosam-main/data/mobile_sam_mask_decoder.onnx --saveEngine=/home/nvidia/nano_sam/nanosam-main/data/mobile_sam_mask_decoder.engine --minShapes=point_coords:1x1x2,point_labels:1x1 --optShapes=point_coords:1x1x2,point_labels:1x1 --maxShapes=point_coords:1x10x2,point_labels:1x1
The engine build fails while parsing the ONNX model with this error:
[07/25/2025-17:50:05] [I] TensorRT version: 10.3.0
[07/25/2025-17:50:05] [I] Loading standard plugins
[07/25/2025-17:50:05] [I] [TRT] [MemUsageChange] Init CUDA: CPU +2, GPU +0, now: CPU 31, GPU 3130 (MiB)
[07/25/2025-17:50:09] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +928, GPU +749, now: CPU 1002, GPU 3928 (MiB)
[07/25/2025-17:50:09] [I] Start parsing network model.
[07/25/2025-17:50:09] [I] [TRT] ----------------------------------------------------------------
[07/25/2025-17:50:09] [I] [TRT] Input filename: /home/nvidia/nano_sam/nanosam-main/data/mobile_sam_mask_decoder.onnx
[07/25/2025-17:50:09] [I] [TRT] ONNX IR version: 0.0.8
[07/25/2025-17:50:09] [I] [TRT] Opset version: 16
[07/25/2025-17:50:09] [I] [TRT] Producer name: pytorch
[07/25/2025-17:50:09] [I] [TRT] Producer version: 2.8.0
[07/25/2025-17:50:09] [I] [TRT] Domain:
[07/25/2025-17:50:09] [I] [TRT] Model version: 0
[07/25/2025-17:50:09] [I] [TRT] Doc string:
[07/25/2025-17:50:09] [I] [TRT] ----------------------------------------------------------------
[07/25/2025-17:50:09] [E] Error[4]: ITensor::getDimensions: Error Code 4: Internal Error (/OneHot: an IIOneHotLayer cannot be used to compute a shape tensor)
[07/25/2025-17:50:09] [E] [TRT] ModelImporter.cpp:948: While parsing node number 146 [Tile -> "/Tile_output_0"]:
[07/25/2025-17:50:09] [E] [TRT] ModelImporter.cpp:950: --- Begin node ---
input: "/Unsqueeze_3_output_0"
input: "/Reshape_2_output_0"
output: "/Tile_output_0"
name: "/Tile"
op_type: "Tile"
[07/25/2025-17:50:09] [E] [TRT] ModelImporter.cpp:951: --- End node ---
[07/25/2025-17:50:09] [E] [TRT] ModelImporter.cpp:953: ERROR: ModelImporter.cpp:195 In function parseNode:
[6] Invalid Node - /Tile
ITensor::getDimensions: Error Code 4: Internal Error (/OneHot: an IIOneHotLayer cannot be used to compute a shape tensor)
[07/25/2025-17:50:09] [E] Failed to parse onnx file
[07/25/2025-17:50:09] [I] Finished parsing network model. Parse time: 0.0439989
[07/25/2025-17:50:09] [E] Parsing model failed
[07/25/2025-17:50:09] [E] Failed to create engine from model or file.
[07/25/2025-17:50:09] [E] Engine set up failed