Deepfakes – apparently authentic digital recordings which use AI to generate false representations of people saying things they never said – have been around for more than a year, but are now starting to permeate the public consciousness in earnest. The general public has become increasingly aware of how convincing digital facsimiles can be, through popular websites such as thispersondoesnotexist.com and rising media interest over the last several months: recent research from cybersecurity firm Deeptrace, for instance, showed that the prevalence of deepfakes doubled in the nine months to October 2019. The question for operators is, in essence at least, relatively straightforward: are deepfakes a potential threat to the mobile industry and its customers, and if so, what role can operators play in defending against them?
While for now deepfakes appear overwhelmingly to be pornographic in nature – as much as 96%, according to Deeptrace – there are increasing concerns over their potential role in political processes. In one rather alarming example last month, deepfake videos of UK Prime Minister Boris Johnson and his rival Jeremy Corbyn appeared to show each endorsing the other, urging voters to support their opponent in the country’s present high-stakes election. And when Gabon’s President was recently taken out of the country for medical care, his administration released a video to demonstrate that he was alive and well – only for his opponents to declare the video a fake (it was in fact real).
Internet companies like Google are working on ways to address that confusion, which some call ‘the liar’s dividend’, by developing tools to identify audiovisual content generated by AI. “You can already see a material effect that deepfakes have had,” explains Nick Dufour, one of the engineers overseeing Google’s research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.” Social media giants like Facebook and Twitter are also developing their approaches to deepfakes, illustrating the seriousness with which they are now taken.
There’s a side to this issue which has so far been attracting rather less attention in the media, however, and that’s financial fraud. As deepfakes mature, there is a real prospect that they’ll become a major tool for misleading financial and commercial organisations into parting with money or data. Fraud of this kind is already taking place via deepfakes, and, according to a report released by Experian this week, we should expect many more cases in the new year. In one high-profile case earlier this year, a deepfake voice on a phone call falsely convinced the CEO of a UK energy firm that he was speaking to a colleague, who demanded he send $243,000 to an account he believed was that of an overseas supplier. It may seem hard to imagine falling prey to a scam of this kind – most of us recognise a regular voice bot the moment we pick up the phone – but the technology is clearly becoming far more convincing.
While governments consider how to legislate against malicious deepfakes – California has become the first US state to criminalise the use of deepfakes in political campaigning, for instance, and last week China announced moves to curtail unlabelled deepfakes – it falls upon the tech sector to provide more adaptive solutions. The internet is a notoriously difficult space to regulate by its very nature, so proactive approaches – which help to identify deepfakes and prevent criminal or misleading uses – are ultimately more constructive than purely reactive punishments, which may be problematic to enforce. The mobile industry and wider tech sector have a natural interest in protecting their customers and the wider digital ecosystem from malicious deepfakes, and have the means to do so – and, as with any development of this kind, that starts with fostering discussion amongst those who can help.
So what do deepfakes mean for the future of biometric authentication – did verifying your identity with a facial scan or voice check just become obsolete? Thankfully no: that remains a very distant prospect. There are currently, for instance, no known deepfakes capable of generating synthetic responses to good-quality ‘liveness checks’ used to guard against spoofing in biometric authentication systems. Most leading remote KYC onboarding players have embedded some form of liveness detection into their verification processes, requiring the user to respond in real time to prompts designed to gauge whether they are real. As Alesis Novik, CTO at Aimbrain, specialists in biometric facial authentication, explains: “A randomised challenge lip sync liveliness test checks both the video and audio channels, requiring the bad actors to generate in real-time artefact-free video and audio response to the challenge, which is not currently feasible.”
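The logic behind such a challenge-response check can be sketched in a few lines. This is a purely illustrative toy (all names and the phrase list are invented, and a real system would score lip-sync and audio artefacts rather than compare text): the point is that the challenge is unpredictable, so an attacker cannot pre-render a deepfake response, and the reply must arrive within a tight time window.

```python
import secrets

# Hypothetical phrase pool for a randomised liveness challenge.
PHRASES = ["blue horizon", "seven lanterns", "quiet river", "amber gate"]

def issue_challenge() -> str:
    """Pick a phrase the attacker could not have pre-rendered."""
    return secrets.choice(PHRASES)

def verify_response(spoken: str, expected: str,
                    elapsed_seconds: float, max_window: float = 5.0) -> bool:
    """Pass only if the exact challenge phrase came back in time."""
    in_time = elapsed_seconds <= max_window
    phrase_ok = spoken.strip().lower() == expected.strip().lower()
    return in_time and phrase_ok

challenge = issue_challenge()
print(verify_response(challenge, challenge, elapsed_seconds=2.0))      # True
print(verify_response("wrong words", challenge, elapsed_seconds=2.0))  # False
print(verify_response(challenge, challenge, elapsed_seconds=30.0))     # False
```

The time bound is what makes the check hard for today’s deepfakes: generating an artefact-free, correctly lip-synced response to an unseen prompt in real time is, as Novik notes, not currently feasible.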
One of the most frequently cited routes to combating the threat of deepfakes is via distributed ledger technologies like blockchain, which create a kind of digital audit trail to establish the provenance and authorship of a given piece of online content. “When it comes to the area of deepfake, blockchain can come to the fore to provide some levels of security, approval and validation,” explains Kevin Gannon, blockchain solutions lead at PwC. “Blockchain has typically been touted as a visibility and transparency play, where once something is done, the ‘who’ and ‘when’ becomes apparent. But it can go further: when a user with a digital identity wants to do something, they could be prompted for proof of their identity before access to something like funds can be granted.”
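The audit-trail idea Gannon describes can be reduced to a hash chain: each ledger entry commits to a content fingerprint, an author, and the hash of the previous entry, so altering any historical record breaks every link after it. The sketch below is a minimal illustration under those assumptions (the class and field names are invented, not a real blockchain API), showing the ‘who’ and ‘when’ property rather than a production system.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic fingerprint of one ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    """Toy append-only chain of content-provenance records."""

    def __init__(self):
        self.entries = []

    def record(self, content: bytes, author: str) -> dict:
        prev = entry_hash(self.entries[-1]) if self.entries else "0" * 64
        entry = {
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "author": author,
            "prev": prev,  # commits to the entire history before it
        }
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every link; any tampering breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = entry_hash(entry)
        return True

ledger = ProvenanceLedger()
ledger.record(b"original broadcast footage", author="broadcaster")
ledger.record(b"edited highlights", author="editor")
print(ledger.verify())  # True
ledger.entries[0]["author"] = "imposter"  # tamper with history
print(ledger.verify())  # False
```

A verifier who trusts the chain’s head can therefore check whether a clip matches a recorded fingerprint – and a deepfake, never having been registered at source, would have no valid entry to point to.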
A key priority while these more novel solutions are developed, then, will be to strengthen requirements for multifactor authentication on financially or politically sensitive platforms. The mobile industry has been taking a major role in the development of blockchain in digital identity, but it can also help keep deepfakes in check while that process plays out – via its unique capacity to identify users by their behaviour, network attributes and device ownership. Additionally, with the expansion of the IoT, operators will become custodians of ever more vast channels of data – meaning they can increasingly look beyond their traditional commercial horizons as providers of connectivity, and help build the next generation of analytics-driven platforms. This means they can be at the forefront of developing the AI tools needed to spot deepfakes in real time – as deepfakes are made possible by AI, it’s very much a case of fighting ‘fire with fire’.
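The operator signals mentioned above – device ownership, network attributes and behaviour – lend themselves to a simple risk-scoring pattern: score each session on the signals the network already holds, and trigger an extra authentication factor when the score crosses a threshold. The sketch below is hypothetical (the signal names, weights and threshold are invented for illustration), but it captures the shape of such a step-up check.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Signals an operator might already hold about a session."""
    known_device: bool   # SIM/device pairing seen before
    home_network: bool   # request from the subscriber's usual network
    typical_hours: bool  # activity fits the user's normal pattern

def risk_score(s: SessionSignals) -> float:
    """Higher means riskier; each missing factor adds weight.
    Weights are illustrative, not calibrated values."""
    score = 0.0
    if not s.known_device:
        score += 0.5
    if not s.home_network:
        score += 0.3
    if not s.typical_hours:
        score += 0.2
    return score

def requires_step_up(s: SessionSignals, threshold: float = 0.5) -> bool:
    """Demand an additional authentication factor for risky sessions."""
    return risk_score(s) >= threshold

print(requires_step_up(SessionSignals(True, True, True)))    # False
print(requires_step_up(SessionSignals(False, False, True)))  # True
```

Because these signals sit on the network side rather than in the recording itself, they remain useful even when the audio or video channel has been convincingly faked.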
It’s an increasingly necessary fight, which the mobile industry is primed to take on – and with its stable of interested partner industries, from airlines to insurers, we can expect some interesting developments on this front in the new year.