Senior Software Engineer - NIM Factory Container and Cloud Infrastructure

NVIDIA · Santa Clara, CA · $184,000 – $356,500
Full Time · Senior Level · 10+ years

Posted 2 weeks ago


About This Role

This Senior Software Engineer will design and implement the core container strategy for NVIDIA Inference Microservices (NIMs) and hosted services, building enterprise-grade software and tooling for container build, packaging, and deployment. The role focuses on improving reliability, performance, and scale across thousands of GPUs, with work expanding into disaggregated LLM inference and emerging deployment patterns.

Responsibilities

  • Design, build, and harden containers for NIM runtimes and inference backends, enabling reproducible, multi-arch, CUDA-optimized builds.
  • Develop Python tooling and services for build orchestration, CI/CD integrations, Helm/Operator automation, and test harnesses, enforcing quality with typing, linting, and unit/integration tests.
  • Help design and evolve Kubernetes deployment patterns for NIMs, including GPU scheduling, autoscaling, and multi-cluster rollouts.
  • Optimize container performance: layer layout, startup time, build caching, runtime memory/IO, network, and GPU utilization; instrument with metrics and tracing.
  • Evolve the base image strategy, dependency management, and artifact/registry topology.
  • Collaborate across research, backend, SRE, and product teams to ensure day-0 availability of new models.
  • Mentor teammates and set high engineering standards for container quality, security, and operability.

Requirements

  • 10+ years building production software with a strong focus on containers and Kubernetes
  • Strong Python skills building production-grade tooling/services
  • Experience with Python SDKs and clients for Kubernetes and cloud services
  • Expert knowledge of Docker/BuildKit, containerd/OCI, image layering, multi-stage builds, and registry workflows
  • Deep experience operating workloads on Kubernetes
  • Strong understanding of LLM inference features, including structured output, KV cache, and LoRA adapters
  • Hands-on experience building and running GPU workloads in k8s, including NVIDIA device plugin, MIG, CUDA drivers/runtime, and resource isolation
  • Excellent collaboration and communication skills; ability to influence cross-functional design

Qualifications

  • A degree in Computer Science, Computer Engineering, or a related field (BS or MS) or equivalent experience.

Nice to Have

  • Expertise with Helm chart design systems, Operators, and platform APIs serving many teams
  • Experience with the OpenAI API and Hugging Face API, as well as an understanding of different inference backends (vLLM, SGLang, TRT-LLM)
  • Background in benchmarking and optimizing inference container performance and startup latency at scale
  • Prior experience designing multi-tenant, multi-cluster, or edge/air-gapped container delivery
  • Contributions to open-source container, k8s, or GPU ecosystems

Skills

Python * Kubernetes * Docker * GPU * Helm * CUDA * vLLM * OCI * containerd * OpenAI API * SGLang * MIG * TRT-LLM * BuildKit * LLM inference * NVIDIA device plugin * Hugging Face API

* Required skills

Benefits

Generous benefits package
Equity
Competitive salaries
