The camera never lies. Except, of course, it does – and seemingly more often with each passing day. In the age of the smartphone, digital edits on the fly to improve photos have become commonplace, from boosting colours to tweaking light levels. Now, a new breed of smartphone tools powered by artificial intelligence (AI) is adding to the debate about what it means to photograph reality.
Google’s latest smartphones, the Pixel 8 and Pixel 8 Pro, released last week, go a step further than devices from other companies. They use AI to help alter people’s expressions in photographs.

It’s an experience we’ve all had: one person in a group shot looks away from the camera or fails to smile. Google’s phones can now look through your photos to mix and match from past expressions, using machine learning to put a smile from a different photo of the same person into the picture. Google calls it Best Take.

The devices also let users erase, move and resize unwanted elements in a photo – from people to buildings – “filling in” the space left behind with what’s called Magic Editor. This uses deep learning: effectively, an artificial intelligence algorithm works out what textures should fill the gap by analysing the surrounding pixels it can see, drawing on knowledge gleaned from millions of other photos.

The photos don’t have to be taken on the device, either. Using the Pixel 8 Pro, you can apply Magic Editor or Best Take to any pictures in your Google Photos library.
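The “fill the gap by analysing surrounding pixels” idea can be illustrated with a deliberately simple sketch. This is a toy diffusion-style fill that propagates known neighbouring values into the hole – not the deep generative model Magic Editor actually uses, which draws on learned priors from millions of photos. The function name and greyscale-grid representation here are illustrative assumptions.

```python
def inpaint(image, mask):
    """Fill masked cells (mask[y][x] == True) with the average of
    already-known neighbours, sweeping inward until every gap is filled.
    `image` is a 2-D list of grey values (0-255)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    unknown = {(y, x) for y in range(h) for x in range(w) if mask[y][x]}
    while unknown:
        progress = []
        for (y, x) in unknown:
            # Gather neighbours whose values are already known
            nbrs = [out[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in unknown]
            if nbrs:
                progress.append(((y, x), sum(nbrs) / len(nbrs)))
        if not progress:
            break  # no cell borders a known value; nothing more to do
        for (y, x), v in progress:
            out[y][x] = v
            unknown.discard((y, x))
    return out
```

Erasing an object amounts to masking its pixels and letting the fill reconstruct the background – here by simple averaging, in real systems by a model that has learned what plausible backgrounds look like.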
‘Icky and creepy’
For some observers, this raises fresh questions about how we take photographs.

Andrew Pearsall, a professional photographer and senior lecturer in journalism at the University of South Wales, agreed that AI manipulation held dangers. “One simple manipulation, even for aesthetic reasons, can lead us down a dark path,” he said.

He said the risks were greater for those who used AI in professional contexts, but there were implications for everyone to consider.

“You’ve got to be very careful about ‘When do you step over the line?’

“It’s quite worrying now you can take a picture and remove something instantly on your phone. I think we are moving into this realm of a kind of fake world.”

Speaking to the BBC, Google’s Isaac Reynolds, who leads the team developing the camera systems on the firm’s smartphones, said the company took the ethical considerations of its consumer technology seriously. He was quick to point out that features like Best Take were not “faking” anything.
And the reviewers who raised concerns about the tech all praised the quality of the camera system’s photos.

“You can finally get that shot where everyone’s how you want them to look – and that’s something you have not been able to do on any smartphone camera, or on any camera, period,” Reynolds said.

“If there was a version [of the photo you’ve taken] where that person was smiling, it will show it to you. But if there was no version where they smiled, yeah, you won’t see that,” he explained.

For Mr Reynolds, the final image becomes a “representation of a moment”. In other words, that specific moment may not have happened, but it’s the picture you wanted to happen, created from multiple real moments.
‘People don’t want reality’
Professor Rafal Mantiuk, an expert in graphics and displays at the University of Cambridge, said it was important to remember that the use of AI in smartphones was not to make the photographs look like real life.

“People don’t want to capture reality,” he said. “They want to capture beautiful images. The whole image processing pipeline in smartphones is meant to produce good-looking images – not real ones.”

The physical limitations of smartphones mean they rely on machine learning to “fill in” information that doesn’t exist in the photo. This helps improve zoom and low-light photographs, and – in the case of Google’s Magic Editor feature – add elements that were never there, or swap in elements from other photos, such as replacing a frown with a smile.

Manipulation of photographs is not new – it’s as old as the art form itself. But never has it been easier to augment the real, thanks to artificial intelligence.

Earlier this year, Samsung came in for criticism over the way it used deep learning algorithms to improve the quality of photos of the Moon taken with its smartphones. Tests found it didn’t matter how poor an image you took to begin with: it always gave you a usable image. In other words, your Moon photo was not necessarily a photo of the Moon you were looking at.

The company acknowledged the criticism, saying it was working to “reduce any potential confusion that may occur between the act of taking a picture of the real Moon and an image of the Moon”.

On Google’s new tech, Reynolds says the company adds metadata to its photos – the digital footprint of an image – using an industry standard to flag when AI is used.

“It is a question that we talk about internally. And we’ve talked at length. Because we’ve been working on these things for years. It’s a conversation, and we listen to what our users are saying,” he says.
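The article does not name the standard Google uses, but one widely adopted candidate for flagging AI involvement is IPTC’s “Digital Source Type” vocabulary, embedded in a photo’s metadata. Assuming that standard for illustration, a crude checker might scan a file’s bytes for the IPTC term marking media composited with trained algorithmic (AI) tools. This is an illustrative sketch, not Google’s implementation, and a real reader would parse the XMP packet properly rather than substring-match.

```python
# IPTC NewsCodes term (assumed here for illustration) marking an image
# composited using trained algorithmic media, i.e. AI-assisted editing.
AI_COMPOSITE = (b"http://cv.iptc.org/newscodes/digitalsourcetype/"
                b"compositeWithTrainedAlgorithmicMedia")

def looks_ai_edited(image_bytes: bytes) -> bool:
    """Crude check: does the file's embedded metadata mention the IPTC
    'composite with trained algorithmic media' digital source type?"""
    return AI_COMPOSITE in image_bytes
```

The point of such flags is that downstream software – galleries, newsrooms, social platforms – can detect them and label AI-edited images, even though metadata can of course be stripped.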
Google is clearly confident users will agree – the AI features of its new phones are at the heart of its advertising campaign.

So, is there a line Google would not cross when it comes to image manipulation?

Mr Reynolds said the debate about the use of artificial intelligence was too nuanced to simply point to a line in the sand and say it was too far. “As you get deeper into building features, you start to realise that a line is sort of an oversimplification of what ends up being a very tricky feature-by-feature decision,” he says.

Even as these new technologies raise ethical considerations about what is and what isn’t reality, Professor Mantiuk said we must also consider the limitations of our own eyes.

He said: “The fact that we see sharp colourful images is because our brain can reconstruct information and infer even missing information.

“So, you may complain cameras do ‘fake stuff’, but the human brain actually does the same thing in a different way.”