AI Testing: How to Ensure Quality in Non-Deterministic Systems with Adam Sandman
How do you ensure software quality when the system you're testing doesn't give the same output twice?
That's the core challenge facing every QA team building or testing AI-powered applications today, and it's breaking all the rules we've relied on for decades.

Go to https://links.testguild.com/inflectra and start your free 30-day trial: no credit card, no contract required.
In this episode of the TestGuild Automation Podcast, I sit down with Adam Sandman, co-founder of Inflectra, to dig into what non-deterministic AI testing actually means in practice, why traditional pass/fail testing no longer cuts it, and what quality professionals need to do differently right now.
We cover:
- Why AI-generated code is raising the stakes for QA teams while budgets stay flat
- The fundamental difference between deterministic and non-deterministic systems — and why it changes everything about how you test
- How to set acceptable risk thresholds for AI systems (hint: it depends on whether you're building an e-commerce chatbot or an air traffic control system)
- Why testers who embrace AI as a tool — not a threat — will be the ones leading their organizations forward
- How a live demo failure at a conference inspired Inflectra's new non-deterministic testing tool, SureWire
If you're a tester, QA manager, or automation engineer trying to figure out how to keep up with AI-driven development without losing your mind — or your job — this one's for you.