Consumer Awareness of Celebrity Deepfakes: Survey Data

In this article

  • Most U.S. adults haven’t encountered a celebrity deepfake, but young people are more likely to have, per a VIP/SmithGeiger survey
  • Over half of consumers are aware of deepfake harms, such as disinformation, impersonation and fake endorsements
  • Public education about synthetic media will help mitigate harm, but technical solutions will be critical in the long term

Most consumers haven’t yet encountered and been fooled by fake AI content featuring a celebrity likeness, but such experiences are more common among younger people, according to a November 2024 VIP survey conducted with global consultative market research firm SmithGeiger Group.

Overall, 30% of U.S. adults indicated they had encountered a deepfake or AI image of a celebrity that they believed was authentic and later learned wasn’t. However, younger people were much more likely to have had such an experience.

That could mean young people are more likely to be exposed because they tend to be more engaged on the platforms where deepfakes circulate most rampantly, particularly social media. It’s also possible that younger people are simply less skeptical of such content, but the realism and naturalism of AI imagery, video and voice are reaching a point where even experts trained to recognize the telltale signs of synthetic media report difficulty distinguishing real from fake without sophisticated tools.

Taken together, greater exposure to increasingly believable content already appears to be translating into higher rates of people being taken in by it, wrongly assuming it’s authentic.

Young people were in fact more likely than older adults to come across fraudulent celebrity deepfakes on social platforms: 55% of U.S. adults ages 18-34 reported seeing one on social media, versus under 40% for older groups. That greater exposure also translated into a higher likelihood of both spreading such content and engaging with scams.

Similarly, younger adults tended to be more likely than older people to know about specific high-profile deepfake incidents involving celebrity likenesses. That held for all incidents cited in our survey except the AI image of Taylor Swift shared by Donald Trump on Truth Social, a post many interpreted as an attempt to falsely suggest she had endorsed him for president.

Nonconsensual celebrity deepfakes and synthetic content have surged online, as VIP has previously discussed. On the public internet, AI misuse of celebrity name, image, likeness and voice (NILV) commonly takes the form of fraudulent endorsements and deepfake nonconsensual intimate images (NCII), predominantly appearing on social media platforms, with the remainder on adult content sites and elsewhere on the web.

Digital ads on social media and other sites falsely display celebrity faces or voices manipulated with AI to drive users to take an action tied to a real product or service (e.g., purchase, subscribe, download) or a scam (e.g., enter a fake giveaway that captures the user’s personal information). Familiar quick-monetization and scammer incentives are at play, and they benefit from association with the celebrity likeness, in effect exploiting a celebrity’s own fan base.

Among its multifaceted harms, the dissemination of this content has the potential to damage personal, professional and corporate reputations, devalue IP by association, cause significant emotional distress and defraud or mislead consumers when used in sophisticated scams and disinformation.

Even if most consumers haven’t encountered celebrity deepfakes themselves, most are at least aware of their potential harms, including that they can result in fake news and disinformation (68%), impersonation (58%), political manipulation (58%) and fake product endorsement (56%). Nearly half are aware of their potential to be used for consumer scams.

Educating the public about synthetic media as part of broader media literacy programs is commonly cited as one of several critical measures for mitigating the harms deepfakes pose to information integrity online. Among the mitigation strategies that have been proposed, consumers tended to rate nontechnical and technical solutions as roughly equally effective.

For their part, computer science experts had greater hopes for technical solutions that would certify the authenticity of digital content, an area of development broadly termed “synthetic media provenance” that includes solutions such as forensic watermarking, hashing and cryptographic metadata.
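
To make that concrete, below is a minimal sketch of the hash-and-sign pattern underlying cryptographic provenance metadata: a publisher signs a fingerprint of a file at creation time, and any later modification breaks verification. It is illustrative only; the Python “cryptography” package, the stand-in media bytes and the fingerprint helper are assumptions for this example, not the API of any specific standard such as C2PA.

```python
# Minimal sketch of hash-and-sign provenance, assuming Python 3 with the
# third-party "cryptography" package installed (pip install cryptography).
# Names and the in-memory stand-in for a media file are hypothetical.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(media: bytes) -> bytes:
    """Return a SHA-256 digest uniquely identifying the media bytes."""
    return hashlib.sha256(media).digest()

# Stand-in for a real image or video file's raw bytes.
original = b"\x89PNG...pretend these are real image bytes..."

# The publisher signs the digest at creation time; the signature would be
# attached to the file as provenance metadata.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(fingerprint(original))

# A consumer app later recomputes the digest and verifies the signature.
try:
    public_key.verify(signature, fingerprint(original))
    print("untouched file: provenance check passed")
except InvalidSignature:
    print("untouched file: provenance check FAILED")

# Any tampering, here a few edited bytes, changes the digest and breaks
# verification, so the signature no longer vouches for the content.
tampered = original.replace(b"pretend", b"deepfake")
try:
    public_key.verify(signature, fingerprint(tampered))
    print("tampered file: provenance check passed")
except InvalidSignature:
    print("tampered file: provenance check FAILED")
```

One tradeoff worth noting: because the cryptographic hash binds the signature to the exact bytes, even benign re-encoding breaks verification, which is one reason forensic watermarking, designed to survive such transformations, is pursued alongside metadata-based approaches.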