Hi,
The fact that NvTensorRTRTXExecutionProvider appears in find_all_providers() but has an empty library_path suggests that Windows ML recognizes the provider type but doesn't actually have a usable DLL for it on your system yet.
As a possible workaround, you might try bypassing the Windows ML auto-download mechanism and working with ONNX Runtime directly:
import onnxruntime as ort
print("ORT version:", ort.__version__)
print("Available providers:", ort.get_available_providers())
print("All providers:", ort.get_all_providers())
If NvTensorRTRTXExecutionProvider shows up only in get_all_providers() but not in get_available_providers(), that would confirm the provider is known by name but not actually present/usable in the runtime. In that situation, one option is to install or build the TensorRT RTX EP manually and then register it with:
ort.register_execution_provider_library(
    "NvTensorRTRTXExecutionProvider",
    "/path/to/the/ep/dll",
)
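If the registration succeeds, you could then check whether ONNX Runtime actually sees a device for that EP and try building a session on it. This is only a rough sketch: it assumes onnxruntime 1.22 or newer with the plugin-EP Python bindings (get_ep_devices() and SessionOptions.add_provider_for_devices()), assumes the returned device objects expose an ep_name attribute, and uses model.onnx as a placeholder path, so please double-check the Python API reference for your exact version:

# Assumes register_execution_provider_library() above has already run.
rtx_devices = [
    d for d in ort.get_ep_devices()  # devices contributed by all registered EP libraries
    if d.ep_name == "NvTensorRTRTXExecutionProvider"  # attribute name is an assumption
]
print("TensorRT RTX EP devices found:", len(rtx_devices))

if rtx_devices:
    sess_options = ort.SessionOptions()
    # Ask ORT to assign the model to the selected EP device(s); empty dict = default options
    sess_options.add_provider_for_devices(rtx_devices, {})
    session = ort.InferenceSession("model.onnx", sess_options)
    print("Providers actually used:", session.get_providers())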
For more details on how ONNX Runtime uses TensorRT on the Python side, see the TensorRT Execution Provider docs:
Disclaimer: Some links are to non-Microsoft websites. The pages appear to provide accurate, safe information. Watch out for ads on those sites that may advertise products frequently classified as PUPs (Potentially Unwanted Products). Thoroughly research any product advertised on a site before you decide to download and install it.
TensorRT Execution Provider (Python)
That page describes how to work directly with TensorrtExecutionProvider in ONNX Runtime, which is a practical workaround if the Windows ML certified EP (NvTensorRTRTXExecutionProvider) isn't being downloaded properly.
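In case it is useful, here is roughly what that direct approach looks like (a minimal sketch: it assumes an onnxruntime-gpu build with TensorRT support and uses model.onnx as a placeholder; if TensorRT can't be loaded, the session silently falls back to the next provider in the list):

import onnxruntime as ort

# Preference order: try TensorRT first, then CUDA, then plain CPU.
providers = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]

session = ort.InferenceSession("model.onnx", providers=providers)

# get_providers() reports which EPs the session actually ended up with,
# so you can tell whether TensorRT was picked up or it fell back.
print("Providers in use:", session.get_providers())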