Merged
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -1,4 +1,8 @@
 # TensorRT OSS Release Changelog
+## 10.16.1 GA - 2026-4-13
+
+- This is a bugfix release with no major new features. See the [release notes](https://docs.nvidia.com/deeplearning/tensorrt/latest/getting-started/release-notes-10/10.16.1.html) for more details.
+
 ## 10.16 GA - 2026-3-24
 
 - General
18 changes: 9 additions & 9 deletions README.md
@@ -43,7 +43,7 @@ To build the TensorRT-OSS components, you will first need the following software

 **TensorRT GA build**
 
-- TensorRT v10.16.0.72
+- TensorRT v10.16.1.11
 - Available from direct download links listed below
 
 **System Packages**
@@ -98,24 +98,24 @@ To build the TensorRT-OSS components, you will first need the following software

 Else download and extract the TensorRT GA build from [NVIDIA Developer Zone](https://developer.nvidia.com) with the direct links below:
 
-- [TensorRT 10.16.0.72 for CUDA 13.2, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz)
-- [TensorRT 10.16.0.72 for CUDA 12.9, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz)
-- [TensorRT 10.16.0.72 for CUDA 13.2, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/zip/TensorRT-10.16.0.72.Windows.win10.cuda-13.2.zip)
-- [TensorRT 10.16.0.72 for CUDA 12.9, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/zip/TensorRT-10.16.0.72.Windows.win10.cuda-12.9.zip)
+- [TensorRT 10.16.1.11 for CUDA 13.2, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz)
+- [TensorRT 10.16.1.11 for CUDA 12.9, Linux x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-12.9.tar.gz)
+- [TensorRT 10.16.1.11 for CUDA 13.2, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/zip/TensorRT-10.16.1.11.Windows.amd64.cuda-13.2.zip)
+- [TensorRT 10.16.1.11 for CUDA 12.9, Windows x86_64](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/zip/TensorRT-10.16.1.11.Windows.amd64.cuda-12.9.zip)
 
 **Example: Ubuntu 22.04 on x86-64 with cuda-13.2**
 
 ```bash
 cd ~/Downloads
-tar -xvzf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz
-export TRT_LIBPATH=`pwd`/TensorRT-10.16.0.72/lib
+tar -xvzf TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz
+export TRT_LIBPATH=`pwd`/TensorRT-10.16.1.11/lib
 ```
 
 **Example: Windows on x86-64 with cuda-12.9**
 
 ```powershell
-Expand-Archive -Path TensorRT-10.16.0.72.Windows.win10.cuda-12.9.zip
-$env:TRT_LIBPATH="$pwd\TensorRT-10.16.0.72\lib"
+Expand-Archive -Path TensorRT-10.16.1.11.Windows.amd64.cuda-12.9.zip
+$env:TRT_LIBPATH="$pwd\TensorRT-10.16.1.11\lib"
 ```
 
 ## Setting Up The Build Environment
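All of the download links in this README follow a single URL pattern. A minimal sketch of that pattern (the helper name `trt_tarball_url` is illustrative only, not part of TensorRT or this PR):

```python
# Build a TensorRT tarball download URL from its components, following the
# pattern of the direct links above. Helper name and signature are hypothetical.
def trt_tarball_url(version: str, platform: str, cuda: str) -> str:
    # The directory on the server uses the three-part release number,
    # while the filename carries the full four-part build string.
    release = ".".join(version.split(".")[:3])  # "10.16.1.11" -> "10.16.1"
    filename = f"TensorRT-{version}.{platform}.cuda-{cuda}.tar.gz"
    return ("https://developer.nvidia.com/downloads/compute/machine-learning/"
            f"tensorrt/{release}/tars/{filename}")

print(trt_tarball_url("10.16.1.11", "Linux.x86_64-gnu", "13.2"))
```

This reproduces the CUDA 13.2 Linux x86_64 link listed above; the Windows packages use a `zip/` directory and `.zip` suffix instead.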
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
-10.16.0.72
+10.16.1.11
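The VERSION file holds the full four-part build string (`major.minor.patch.build`). A quick sketch of splitting it into the numeric fields that the headers and wheel filenames reuse (helper name is hypothetical):

```python
# Split a four-part TensorRT version string like "10.16.1.11" into its
# numeric fields. Purely illustrative; not part of this repository.
def parse_trt_version(version: str) -> tuple:
    major, minor, patch, build = (int(part) for part in version.split("."))
    return major, minor, patch, build

print(parse_trt_version("10.16.1.11"))  # (10, 16, 1, 11)
```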
18 changes: 9 additions & 9 deletions docker/rockylinux8.Dockerfile
@@ -20,7 +20,7 @@ ARG CUDA_VERSION=13.2.0
 FROM nvidia/cuda:${CUDA_VERSION}-devel-rockylinux8
 LABEL maintainer="NVIDIA CORPORATION"
 
-ENV TRT_VERSION 10.16.0.72
+ENV TRT_VERSION 10.16.1.11
 SHELL ["/bin/bash", "-c"]
 
 # Setup user account
@@ -55,15 +55,15 @@ RUN dnf install -y python38 python38-devel &&\

 # Install TensorRT
 RUN if [ "${CUDA_VERSION:0:2}" = "13" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib64 \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp38-none-linux_x86_64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib64 \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp38-none-linux_x86_64.whl ;\
 elif [ "${CUDA_VERSION:0:2}" = "12" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib64 \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp38-none-linux_x86_64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-12.9.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-12.9.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib64 \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp38-none-linux_x86_64.whl ;\
 else \
     echo "Invalid CUDA_VERSION"; \
     exit 1; \
18 changes: 9 additions & 9 deletions docker/rockylinux9.Dockerfile
@@ -20,7 +20,7 @@ ARG CUDA_VERSION=13.2.0
 FROM nvidia/cuda:${CUDA_VERSION}-devel-rockylinux9
 LABEL maintainer="NVIDIA CORPORATION"
 
-ENV TRT_VERSION 10.16.0.72
+ENV TRT_VERSION 10.16.1.11
 SHELL ["/bin/bash", "-c"]
 
 # Setup user account
@@ -60,15 +60,15 @@ RUN dnf -y install \

 # Install TensorRT
 RUN if [ "${CUDA_VERSION:0:2}" = "13" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib64 \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp39-none-linux_x86_64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib64 \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp39-none-linux_x86_64.whl ;\
 elif [ "${CUDA_VERSION:0:2}" = "12" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib64 \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp39-none-linux_x86_64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-12.9.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-12.9.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib64 \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp39-none-linux_x86_64.whl ;\
 else \
     echo "Invalid CUDA_VERSION"; \
     exit 1; \
18 changes: 9 additions & 9 deletions docker/ubuntu-22.04.Dockerfile
@@ -20,7 +20,7 @@ ARG CUDA_VERSION=13.2.0
 FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu22.04
 LABEL maintainer="NVIDIA CORPORATION"
 
-ENV TRT_VERSION 10.16.0.72
+ENV TRT_VERSION 10.16.1.11
 SHELL ["/bin/bash", "-c"]
 
 # Setup user account
@@ -73,15 +73,15 @@ RUN apt-get install -y --no-install-recommends \

 # Install TensorRT
 RUN if [ "${CUDA_VERSION:0:2}" = "13" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib64 \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp310-none-linux_x86_64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib/x86_64-linux-gnu \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp310-none-linux_x86_64.whl ;\
 elif [ "${CUDA_VERSION:0:2}" = "12" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib64 \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp310-none-linux_x86_64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-12.9.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-12.9.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib/x86_64-linux-gnu \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp310-none-linux_x86_64.whl ;\
 else \
     echo "Invalid CUDA_VERSION"; \
     exit 1; \
18 changes: 9 additions & 9 deletions docker/ubuntu-24.04-aarch64.Dockerfile
@@ -20,7 +20,7 @@ ARG CUDA_VERSION=13.2.0
 # Multi-arch container support available in non-cudnn containers.
 FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu24.04
 
-ENV TRT_VERSION 10.16.0.72
+ENV TRT_VERSION 10.16.1.11
 SHELL ["/bin/bash", "-c"]
 
 # Setup user account and edit default account
@@ -83,15 +83,15 @@ ENV PATH="/opt/venv/bin:$PATH"

 # Install TensorRT
 RUN if [ "${CUDA_VERSION:0:2}" = "13" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.aarch64-gnu.cuda-13.2.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.aarch64-gnu.cuda-13.2.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib/aarch64-linux-gnu/ \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp312-none-linux_aarch64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.aarch64-gnu.cuda-13.2.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.aarch64-gnu.cuda-13.2.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib/aarch64-linux-gnu/ \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp312-none-linux_aarch64.whl ;\
 elif [ "${CUDA_VERSION:0:2}" = "12" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.aarch64-gnu.cuda-12.9.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.aarch64-gnu.cuda-12.9.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib/aarch64-linux-gnu/ \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp312-none-linux_aarch64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.aarch64-gnu.cuda-12.9.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.aarch64-gnu.cuda-12.9.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib/aarch64-linux-gnu/ \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp312-none-linux_aarch64.whl ;\
 else \
     echo "Invalid CUDA_VERSION"; \
     exit 1; \
18 changes: 9 additions & 9 deletions docker/ubuntu-24.04.Dockerfile
@@ -21,7 +21,7 @@ ARG CUDA_VERSION=13.2.0
 FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu24.04
 LABEL maintainer="NVIDIA CORPORATION"
 
-ENV TRT_VERSION=10.16.0.72
+ENV TRT_VERSION=10.16.1.11
 SHELL ["/bin/bash", "-c"]
 
 # Setup user account and edit default account
@@ -82,15 +82,15 @@ ENV PATH="/opt/venv/bin:$PATH"

 # Install TensorRT
 RUN if [ "${CUDA_VERSION:0:2}" = "13" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-13.2.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib/x86_64-linux-gnu/ \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp312-none-linux_x86_64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-13.2.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib/x86_64-linux-gnu/ \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp312-none-linux_x86_64.whl ;\
 elif [ "${CUDA_VERSION:0:2}" = "12" ]; then \
-    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz \
-    && tar -xf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz \
-    && cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib/x86_64-linux-gnu/ \
-    && pip install TensorRT-10.16.0.72/python/tensorrt-10.16.0.72-cp312-none-linux_x86_64.whl ;\
+    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-12.9.tar.gz \
+    && tar -xf TensorRT-10.16.1.11.Linux.x86_64-gnu.cuda-12.9.tar.gz \
+    && cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib/x86_64-linux-gnu/ \
+    && pip install TensorRT-10.16.1.11/python/tensorrt-10.16.1.11-cp312-none-linux_x86_64.whl ;\
 else \
     echo "Invalid CUDA_VERSION"; \
     exit 1; \
8 changes: 4 additions & 4 deletions docker/ubuntu-cross-aarch64.Dockerfile
@@ -21,7 +21,7 @@ ARG OS_VERSION=24.04
 FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${OS_VERSION}
 LABEL maintainer="NVIDIA CORPORATION"
 
-ENV TRT_VERSION 10.16.0.72
+ENV TRT_VERSION 10.16.1.11
 ENV DEBIAN_FRONTEND=noninteractive
 
 # Setup user account and edit default account
@@ -87,9 +87,9 @@ RUN wget https://developer.download.nvidia.com/compute/cuda/13.2.0/local_install

 # Unpack libnvinfer.
 
-RUN wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.0/tars/TensorRT-10.16.0.72.Linux.aarch64-gnu.cuda-13.1.tar.gz && \
-    tar -xf TensorRT-10.16.0.72.Linux.aarch64-gnu.cuda-13.1.tar.gz && \
-    cp -a TensorRT-10.16.0.72/lib/*.so* /usr/lib/aarch64-linux-gnu
+RUN wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.16.1/tars/TensorRT-10.16.1.11.Linux.aarch64-gnu.cuda-13.2.tar.gz && \
+    tar -xf TensorRT-10.16.1.11.Linux.aarch64-gnu.cuda-13.2.tar.gz && \
+    cp -a TensorRT-10.16.1.11/lib/*.so* /usr/lib/aarch64-linux-gnu
 
 # Link required library
 RUN cd /usr/aarch64-linux-gnu/lib && ln -sf librt.so.1 librt.so
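Each Dockerfile above installs the wheel matching its base image's default CPython: cp38 for Rocky Linux 8, cp39 for Rocky Linux 9, cp310 for Ubuntu 22.04, and cp312 for Ubuntu 24.04. A sketch of that mapping (the helper and lookup table are illustrative only, not part of this PR):

```python
# Wheel filename per container image, as installed by the Dockerfiles above.
# The distro -> CPython-tag table is read off the diffs; the helper is hypothetical.
WHEEL_TAGS = {
    "rockylinux8": "cp38",
    "rockylinux9": "cp39",
    "ubuntu-22.04": "cp310",
    "ubuntu-24.04": "cp312",
}

def wheel_name(version: str, distro: str, arch: str = "linux_x86_64") -> str:
    tag = WHEEL_TAGS[distro]
    return f"tensorrt-{version}-{tag}-none-{arch}.whl"

print(wheel_name("10.16.1.11", "ubuntu-22.04"))
```

The aarch64 image uses the same pattern with `arch="linux_aarch64"`.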
4 changes: 2 additions & 2 deletions include/NvInferVersion.h
@@ -25,8 +25,8 @@

 #define TRT_MAJOR_ENTERPRISE 10
 #define TRT_MINOR_ENTERPRISE 16
-#define TRT_PATCH_ENTERPRISE 0
-#define TRT_BUILD_ENTERPRISE 72
+#define TRT_PATCH_ENTERPRISE 1
+#define TRT_BUILD_ENTERPRISE 11
 #define NV_TENSORRT_MAJOR TRT_MAJOR_ENTERPRISE //!< TensorRT major version.
 #define NV_TENSORRT_MINOR TRT_MINOR_ENTERPRISE //!< TensorRT minor version.
 #define NV_TENSORRT_PATCH TRT_PATCH_ENTERPRISE //!< TensorRT patch version.
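NvInferVersion.h encodes the 10.16.1 patch release in the macros above. Recent TensorRT headers also pack major/minor/patch into a single comparable integer; a sketch under the assumption that the conventional `major*10000 + minor*100 + patch` packing applies (the packing macro itself is not shown in this diff):

```python
# Pack the TensorRT version macros into one integer, mirroring the
# assumed NV_TENSORRT_VERSION-style packing (major*10000 + minor*100 + patch).
TRT_MAJOR, TRT_MINOR, TRT_PATCH, TRT_BUILD = 10, 16, 1, 11

def packed_version(major: int, minor: int, patch: int) -> int:
    return major * 10000 + minor * 100 + patch

print(packed_version(TRT_MAJOR, TRT_MINOR, TRT_PATCH))  # 101601
```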