Amazon is developing new technology for its Alexa assistant that will be able to imitate the voice of any person, dead or alive, using less than a minute of recorded audio.
At the Re:Mars conference in Las Vegas, Nevada on June 22, Amazon Senior Vice President and Chief Scientist Rohit Prasad demonstrated this feature using a video of a child asking an Amazon device: “Alexa, can Grandma finish reading me ‘The Wizard of Oz’?”
Alexa acknowledges the request in its default synthetic voice, then immediately switches to the soft, warm tone of the child's grandmother.
During the demonstration, Prasad noted that the feature could be used to memorialize a deceased family member. “Many of us have lost those we love,” he said. “Modern reality has pushed us to make artificial companionship a key focus of the company.”
“Although this will not take away the pain of loss, it can definitely help preserve good memories,” Prasad said.
But despite the presentation's uplifting tone, Alexa's new capability drew a sharp reaction from some in the technology world, who saw voice mimicry not only as a means of emotional connection, but also as an ideal tool for deepfakes, criminal scams and other nefarious purposes.
An Amazon spokesperson said Prasad's presentation was based on the company's text-to-speech research, which draws on the latest advances in the field.
“We have learned how to produce high quality voice with much less data compared to recording in a professional studio,” the spokesperson said.
The voice-simulation feature is currently in development, and the company has not said when it intends to release it to the general public.
According to Amazon's VP, the new text-to-speech technology needs only “less than a minute of recorded audio” to produce a high-quality voice. The technology may one day become ubiquitous, and Prasad noted that it can help build trust between users and their devices.
“One thing that surprised me the most about Alexa was the friendly relationship we have with her. In this role, the human qualities of empathy are key to building trust,” he stressed.
The new imitation feature is groundbreaking, but it raises concerns, including among companies working in the field, that it could be used for nefarious purposes.
Microsoft, which has also created voice-simulation technology to help people with speech impairments, restricts which of its business customers can use it. Natasha Crampton, chief artificial intelligence officer at Microsoft, said the company fears the innovation could be used to create deepfakes.
The new feature is also causing concern on the Internet.
“Remember when we told you that deepfakes would exacerbate the distrust, alienation, and epistemological crisis that has already begun in this culture? There will be much more of this now,” wrote Twitter user @wolven.
Some fear that scammers will be able to easily use this technology to their advantage.
Mike Butcher, TechCrunch's ClimateTech editor, noted: “Alexa imitating a dead family member sounds like a recipe for deep psychological damage.” Others advised people to stop buying the device entirely.