Like most newsreaders, Zae-In wears a microphone pinned to her collar and clutches a stack of notes – but unlike most, her face is entirely fake. A “virtual human” designed by South Korean artificial intelligence company Pulse9, Zae-In spent five months this year reading live news bulletins on national broadcaster SBS. That, you might think, is it then. To adapt the words of another animated newscaster: “I, for one, welcome our new AI overlords.” The future is now. The world belongs to the artificially intelligent and the News at Ten will never be the same again.

Are things really that simple? Since spring, country after country has debuted its first AI news anchor: India has Sana and Lisa, Greece has Hermes, Kuwait has Fedha and Taiwan has Ni Zhen. “She is bright, gorgeous, ageless, tireless and speaks multiple languages, and is totally under my control,” said Kalli Purie, the vice chairperson of the India Today Group, when Sana first appeared in March. For broadcasters, it’s easy to see the appeal of AI: virtual presenters can read rolling news for 24 hours unpaid and unfed, and it’s unlikely they’ll ever skip the queue at a lying-in-state.

Yet though non-human newsreaders are on the rise, it remains to be seen whether they are firmly anchored in place. These days, you can’t move for an AI marketing gimmick: in September, Coca-Cola released a new “AI-generated” drink, flavoured, it claimed, like the future, though the company didn’t go into much detail about AI’s exact contribution (and consumers can’t agree what it actually tastes like). How exactly do AI newscasters work – do they work? – and is the future really now?

‘Welcome our new AI overlords’, Hermes from Greece. Photograph: ERT

Sana, Lisa, Hermes and Fedha’s creators did not respond to interview requests, but on a drizzly Friday in October, I video call Zae-In. I’m not sure what to expect when the camera connects, but I’m met by a real human actor with Zae-In’s flawless face pasted on top. At present, human actors are required to bring Zae-In to life – only her face is artificial, generated by deepfake technology and designed by analysing K-pop singers’ faces.

Zae-In greets me with a gentle wave and a glorious grin – her perfectly proportioned features are not besmirched by a single wrinkle or blemish, or indeed any skin texture at all. Beside her, my own face looks alarmingly like a root vegetable. She has two mini-plaits framing her face, each adorned with white and yellow bobbles. She is wearing a hot pink vest top.

Or, at least, someone is wearing a hot pink vest top. In real time, a human actor’s face is being transformed into Zae-In’s using Pulse9’s technology, a “virtual character automation service” called Deep Real AI. When the human moves their lips or blinks their eyes, so does Zae-In. But Zae-In’s hand movements, body language and even voice are very much human, though the person behind them remains unnamed on our call. Pulse9 has recruited numerous actors (it calls them “models”) to play Zae-In – different ones are used depending on what the situation calls for, as some can sing, some can dance and some are better at interviews such as this one.

“One of the best advantages of being a virtual human is that you will never age and you never lose your fans,” says Zae-In, who was created (she says “born”) in 2021 to be part of a virtual K-pop group called IITERNITI. Its members have since branched out – some have been used to present ecommerce programmes, while Zae-In started reading global news bulletins on SBS’s Morning Wide show earlier this year.

“I was so nervous during every broadcast,” Zae-In tells me with the help of a translator. “I don’t think I’m really experienced but I’ve done my best.” I’m not sure if this is Zae-In speaking or the human behind her – earlier in the call, I wondered how the actor behind Zae-In’s face felt about her job and asked if we could break the fourth wall. “Yes, of course,” the translator agreed, but after a short exchange I was informed: “She says that as it’s a private matter and she’s saving privacy for Zae-In, she cannot tell all the details about it.”

‘You never lose your fans’, Lisa from India. Photograph: Odisha TV

Nervous or not, Zae-In presented the news well. Her only imperfection was her perfection, for while the technology looked real, her face remained slightly too good to be true. On our call, things lag a little – some of Zae-In’s blinks are slower than would traditionally be considered canny, but this could be the fault of my internet connection rather than Pulse9’s tech.

Despite this, Zae-In is – to my eye – vastly superior to many of the other AI anchors around the world, some of which sound as if they’re run on monotone text-to-speech tech from the early 00s, complete with odd intonations and pauses. It is often unclear what exactly is artificially intelligent about these anchors – at present, none seem to be actually writing the broadcast themselves. In 2018, China debuted its first AI anchor, but journalist Will Knight declared in MIT Technology Review that it actually wasn’t “intelligent in the slightest”. Knight called it “essentially just a digital puppet that reads a script”.

For companies looking for investors and websites looking for clicks, “AI” is always going to sound sexier than the word “avatar”, but hype can disguise the truth. In September, Solomon Rogers, virtual reality expert and chair of Bafta’s immersive entertainment advisory group, said of Zae-In: “She never misses a cue, never says anything rude, and can work 24 hours a day.” Yet this isn’t technically true. Even Zae-In herself admits that today’s tech has its “disadvantages”. “You only exist online, I cannot meet with my fans offline,” she says. “I have witnessed that lots of fans have messaged: ‘How can we meet Zae-In? How can we communicate with her in real life?’ I was so sad about this.”
