Sam Altman’s Trust Issues at OpenAI
About this title
At the end of February, OpenAI’s C.E.O., Sam Altman, made headlines by swiftly cutting a deal with the Pentagon for his company to replace Anthropic, which had balked at the Trump Administration’s bid to use its A.I. technology to power autonomous weapons and aid in mass surveillance. Days earlier, Altman had publicly supported Anthropic’s position in the dispute. Altman’s rise to power and his founding of OpenAI were predicated on placing safety above other concerns in developing artificial general intelligence. Why did he change his stance on such a fundamental issue? The New Yorker writers Ronan Farrow and Andrew Marantz spoke with Altman multiple times and interviewed more than a hundred people for their investigation into the leader of one of the most powerful companies in the world, comparing Altman to J. Robert Oppenheimer. Although there is no smoking gun in Altman’s hand, the writers find that persistent allegations about his conduct underscore the danger of entrusting him to wield such vast power over the future.
Further reading:
- “Sam Altman May Control Our Future—Can He Be Trusted?,” by Ronan Farrow and Andrew Marantz
- “The Dangerous Paradox of A.I. Abundance,” by John Cassidy
- “The A.I. Bubble Is Coming for Your Browser,” by Kyle Chayka
New episodes of The New Yorker Radio Hour drop every Tuesday and Friday. Join host David Remnick as he discusses the latest in politics, news, and current events in conversation with political leaders, newsmakers, innovators, New Yorker staff writers, authors, actors, and musicians.