- Job ID: J53022
- Job Title: AI Operations Platform Consultant
- Location: Jersey City, NJ
- Duration: 24 Months + Extension
- Hourly Rate: Depending on Experience (DOE)
- Work Authorization: US Citizen, Green Card, OPT-EAD, CPT, H-1B, H4-EAD, L2-EAD, GC-EAD
- Client: To Be Discussed Later
- Employment Type: W-2, 1099, C2C
Job Description:
- Brings extensive experience operating large-scale GPU-accelerated AI platforms, deploying and managing LLM inference systems on Kubernetes, with strong expertise in Triton Inference Server and TensorRT-LLM.
- They have repeatedly built and optimized production-grade LLM pipelines with GPU-aware scheduling, load balancing, and real-time performance tuning across multi-node clusters. Their background includes designing containerized microservices, implementing robust deployment workflows, and maintaining operational reliability in mission-critical environments.
- They have led end-to-end LLMOps processes involving model versioning, engine builds, automated rollouts, and secure runtime controls.
- The candidate has also developed comprehensive observability for inference systems, using telemetry and custom dashboards to track GPU health, latency, throughput, and service availability.
- Their work consistently incorporates advanced optimization methods such as mixed precision, quantization, sharding, and batching to improve efficiency. Overall, they bring a strong blend of platform engineering, AI infrastructure, and hands-on operational experience running high-performance LLM systems in production.
Basic Info:
- AI Operations Platform Consultant
- Experience deploying, managing, operating, and troubleshooting containerized services at scale on Kubernetes (OpenShift) for mission-critical applications
- Experience deploying, configuring, and tuning LLMs using TensorRT-LLM and Triton Inference Server
- Managing, operating, and supporting MLOps/LLMOps pipelines, using TensorRT-LLM and Triton Inference Server to deploy inference services in production
- Setup and operation of AI inference service monitoring for performance and availability
- Experience deploying and troubleshooting LLM models on a containerized platform, including monitoring and load balancing
- Experience with standard processes for operating a mission-critical system: incident management, change management, event management, etc.
- Managing scalable infrastructure for deploying and managing LLMs
- Deploying models in production environments, including containerization, microservices, and API design
- Knowledge of Triton Inference Server, including its architecture, configuration, and deployment
- Model optimization techniques using Triton with TensorRT-LLM
- Model optimization techniques, including pruning, quantization, and knowledge distillation
Equal Opportunity Employer:
ROBOTICS TECHNOLOGIES LLC is an equal opportunity employer inclusive of female, minority, disability and veterans (M/F/D/V). Hiring, promotion, transfer, compensation, benefits, discipline, termination and all other employment decisions are made without regard to race, color, religion, sex, sexual orientation, gender identity, age, disability, national origin, citizenship/immigration status, veteran status or any other protected status. ROBOTICS TECHNOLOGIES LLC will not make any posting or employment decision that does not comply with applicable laws relating to labor and employment, equal opportunity, employment eligibility requirements or related matters. Nor will ROBOTICS TECHNOLOGIES LLC require in a posting or otherwise U.S. citizenship or lawful permanent residency in the U.S. as a condition of employment except as necessary to comply with law, regulation, executive order, or federal, state, or local government contract.