How We Cut LLM Latency 70% With TensorRT in Production
About this title
Maher Hanafi is an engineering leader who went from zero AI experience to self-hosting LLMs at enterprise scale — managing GPU costs, optimizing inference with TensorRT LLM, and building an AI platform for HR tech. In this conversation, he breaks down exactly how his team cut latency by 70%, reduced GPU spend through counterintuitive scaling strategies, and navigated the messy reality of taking AI from proof-of-concept to production.
How We Cut LLM Latency 70% With TensorRT in Production // MLOps Podcast #369 with Maher Hanafi, SVP of Engineering at Betterworks
Key topics covered:
The AI Iceberg — Why the invisible work behind AI (performance, latency, throughput, cost, accuracy) is harder than building the features themselves
GPU Cost Optimization — How upgrading to more expensive GPUs actually saved money by reducing total runtime hours (see the cost arithmetic sketch after this list)
TensorRT LLM Deep Dive — Rewiring neural networks to match GPU architecture for 50-70% latency reduction
Cold Start Solutions — Using AWS FSx, baking models into container images, and cutting minutes off spin-up times (the baking approach is sketched after this list)
KV Cache & In-Flight Batching — Why using one model per GPU with maximum KV cache beats cramming multiple models together (see the serving sketch after this list)
Scheduled & Dynamic Scaling — Pattern-based scaling for HR tech workloads (nights, weekends, end-of-quarter spikes); see the scaling sketch after this list
Verticalized AI Platform — Building horizontal AI infrastructure that serves multiple HR product verticals
AI Engineering Lab — How junior vs. senior engineers adopted AI coding tools differently, and the cultural shift that followed
Agentic Coding in Practice — Navigating AI coding agent costs, quality control, and redefining the SDLC
Chinese Models & Compliance — Why enterprise customers block DeepSeek/Qwen and the geopolitics of model training data
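To give a rough feel for the "bigger GPU, lower cost" point above, here is a minimal arithmetic sketch. The hourly rates and speedup factor are made-up placeholders, not figures from the episode; the point is only that a pricier GPU can win once it finishes the same work in fewer hours.

```python
# gpu_cost_math.py: illustrative arithmetic only; all numbers are hypothetical.
cheap_gpu_rate = 1.20   # $/hr for a smaller GPU (placeholder)
big_gpu_rate = 4.00     # $/hr for a larger GPU (placeholder)
speedup = 4.5           # larger GPU finishes the same batch 4.5x faster (placeholder)

hours_on_cheap = 100.0
hours_on_big = hours_on_cheap / speedup  # same workload, fewer billable hours

print(f"Smaller GPU: ${cheap_gpu_rate * hours_on_cheap:,.2f}")  # $120.00
print(f"Larger GPU:  ${big_gpu_rate * hours_on_big:,.2f}")      # $88.89
```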
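For the cold-start point, one common way to bake model weights into a container image is to download them at image build time so nothing is fetched when a pod spins up. A minimal sketch using huggingface_hub; the model name and target path are hypothetical, and the episode also discusses AWS FSx as an alternative to baking.

```python
# bake_model.py: run during the image build (e.g. `RUN python bake_model.py`)
# so the weights ship inside the image instead of being pulled on first request.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Llama-3.1-8B-Instruct",     # hypothetical model choice
    local_dir="/opt/models/llama-3.1-8b-instruct",  # hypothetical path inside the image
)
```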
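For the TensorRT LLM and KV-cache points, recent TensorRT-LLM releases expose a high-level LLM API with in-flight (continuous) batching built in, plus a knob for how much free GPU memory the KV cache may claim. This is a sketch under those assumptions, not the team's actual setup: the model name and values are hypothetical, and option names vary between releases, so check the docs for your version.

```python
# trtllm_serving_sketch.py: one model per GPU with a large KV-cache budget.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import KvCacheConfig

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model choice
    # Give the KV cache most of the remaining GPU memory so concurrent requests
    # can be batched in flight instead of queuing.
    kv_cache_config=KvCacheConfig(free_gpu_memory_fraction=0.90),
)

outputs = llm.generate(
    ["Summarize this performance review in two sentences: ..."],
    SamplingParams(max_tokens=128, temperature=0.2),
)
print(outputs[0].outputs[0].text)
```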
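For scheduled scaling, one possible implementation (among many) is EC2 Auto Scaling scheduled actions set from a small script: scale the GPU workers down overnight and back up before the workday. The group name, capacities, and cron expressions below are hypothetical.

```python
# schedule_scaling.py: pattern-based scaling sketch; all names and numbers are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the GPU worker group down overnight when traffic is quiet.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="llm-inference-gpu-workers",
    ScheduledActionName="nightly-scale-down",
    Recurrence="0 2 * * *",        # every day at 02:00 UTC
    MinSize=0,
    DesiredCapacity=0,
    MaxSize=2,
)

# Scale back up before the workday starts.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="llm-inference-gpu-workers",
    ScheduledActionName="morning-scale-up",
    Recurrence="0 13 * * 1-5",     # weekdays at 13:00 UTC
    MinSize=2,
    DesiredCapacity=4,
    MaxSize=8,
)
```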
This episode is for engineering leaders building AI in production, MLOps engineers optimizing GPU infrastructure, and anyone navigating the gap between AI demos and enterprise-scale deployment.
Links & Resources:
TensorRT LLM: https://github.com/NVIDIA/TensorRT-LLM
NVIDIA Run:ai Model Streamer (cold start optimization): https://developer.nvidia.com/blog/reducing-cold-start-latency-for-llm-inference-with-nvidia-runai-model-streamer/
vLLM vs TensorRT-LLM comparison: https://northflank.com/blog/vllm-vs-tensorrt-llm-and-how-to-run-them
Timestamps:
0:00 — Intro & teaser clips
1:00 — Maher's journey from traditional engineering to AI leadership
4:30 — The AI iceberg: cost, performance, latency, throughput, accuracy
8:00 — Managing AI coding agent costs & premium token budgets
12:00 — GPU scaling strategies: scheduled, dynamic, and proactive
16:00 — Cold start problem: FSx, baked images, and container optimization
20:00 — TensorRT LLM: 50-70% latency reduction explained
25:00 — KV cache, in-flight batching, and throughput optimization
30:00 — The counterintuitive math: bigger GPUs = lower cost
35:00 — Verticalized AI products for HR tech
40:00 — Building a horizontal AI platform with preprocessing layers
45:00 — AI feedback polishing: the feature that needed guardrails
50:00 — AI Engineering Lab: adoption curves by seniority
55:00 — Redefining the SDLC for AI-assisted development
60:00 — Self-hosting coding agents & leveraging internal AI platform
63:00 — Chinese models, compliance, and training data bias