One of the places SFF stories live is where the edge of science and the corner of ethics should meet, but don’t. Is it regulated? Science so often outpaces regulation and the societal conversation.
Lyrebird is an AI voice mimic. I heard an NPR interview a few months ago with the app’s developers and sat in stunned silence at the implications. Apparently, other more enterprising (and criminal) people decided to put the app to use.
In one reported case, a company director took a call from what sounded exactly like his boss, demanding an urgent wire transfer. The problem: it wasn’t his boss, and the money is gone.
“The victim director was first called late one Friday afternoon in March, and the voice demanded he urgently wire money to a supplier in Hungary to help save the company in late-payment fines. The fake executive referred to the director by name and sent the financial details over email.”
This is one of those advances with the potential to change everything, because you don’t need to be a hacker or own expensive equipment to have a big impact. You just need a phone.
There are a lot of angles for this story: the criminal, the victim, and the cybersecurity agents who investigate (and try to prevent) these crimes. There’s also the stretch of imagining what else could happen when you can’t trust your own eyes or ears.
There’s also this Orwellian statement from Lyrebird’s ethics page. Talk about a scary prompt for Halloween…
“Second, we want to ensure that your digital voice is yours. We are stewards of your voice, but you control its usage: no one can use it without your explicit consent.”