Compatible with CUDA (for training, both the model weights and the gradients must fit in VRAM). The CUDA ecosystem is well established for AI-intensive computing: developers can work with their models entirely offline, or upload them to NVIDIA's cloud.
The Jetson Orin Nano Super maintains compatibility with the major ML frameworks, including Hugging Face Transformers, Ollama, llama.cpp, vLLM, MLC, and NVIDIA TensorRT-LLM.
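To make the VRAM constraint above concrete, here is a minimal sketch of a back-of-the-envelope estimator for training memory. It assumes a common setup (fp16 weights and gradients, an Adam-style optimizer keeping two fp32 moment tensors per parameter) and ignores activations and framework overhead, so real usage will be higher; the function name and defaults are illustrative, not from any library.

```python
def training_vram_gib(num_params: int,
                      weight_bytes: int = 2,   # fp16 weights (assumption)
                      grad_bytes: int = 2,     # fp16 gradients (assumption)
                      optim_bytes: int = 8):   # two fp32 Adam moments (assumption)
    """Rough lower bound on training VRAM in GiB.

    Counts only weights + gradients + optimizer states; activations,
    CUDA context, and framework buffers add to this in practice.
    """
    total_bytes = num_params * (weight_bytes + grad_bytes + optim_bytes)
    return total_bytes / 2**30

# A 1B-parameter model under these assumptions needs at least ~11 GiB,
# before activations -- already beyond many small edge devices' memory.
print(f"{training_vram_gib(1_000_000_000):.1f} GiB")
```

This is why edge boards in this class are typically used for inference or for fine-tuning small models, while full training runs are pushed to the cloud.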