Voice actors are increasingly being used to train artificial intelligence. What will be the consequences?
Posted: Wed Jan 29, 2025 6:03 am
Workers in various industries fear being replaced by software. This is already happening in Latin America: voice actors are being hired to record datasets and train algorithms that will eventually compete with them for roles. Developers of artificial intelligence for dubbing promise to cut costs for producers by allowing movie characters to speak any language while retaining their original voices.
None of the companies bringing this technology to Latin America are from Spanish-speaking countries, and each takes a slightly different approach.
Tel Aviv-based Deepdub promises synthetic voices that reproduce a full range of human emotions, suitable for TV shows and movies.
London-based Papercup focuses on non-fiction content, like that shown by the BBC, Sky News, and Discovery.
Seoul-based Klling combines AI-based dubbing with deepfake technology, synchronizing actors' lip movements with computer-generated voices.
Despite their differences, all of these companies defend their product as a cheaper and faster alternative to voice actors.
Oz Krakowski, director of revenue at Deepdub, says artificial voices have an advantage when it comes to dubbing Hollywood blockbusters into other languages: AI can let a character speak in the voice of a star like Morgan Freeman, preserving the original voice while delivering flawless Spanish in any local accent or dialect.
This does not impress actors and their fans in Latin America, however. Mexican dubbing actress Gabriela Guzmán says: “Maybe an artificial intelligence can imitate a voice, but it will never be able to act like a real actor. It simply has no soul, period.” And the criticism is not only about abstract qualities; it also touches on practical matters that extend beyond on-screen work.
“How do you bring an artificial intelligence to Comic-Con to talk to the audience? It’s just impossible… Voice actors have fans,” Guzmán muses.