Deception & Trust: A Deep Look At Deep Fakes


With recent focus on disinformation and “fake news,” new technologies used to deceive people online have sparked concerns among the public. While in the past only an expert forger could create realistic fake media, deceptive techniques drawing on the latest research in machine learning now allow anyone with a smartphone to generate high-quality fake videos, or “deep fakes.”

Like other forms of disinformation, deep fakes can be designed to incite panic, sow distrust in political institutions, or produce myriad other harmful outcomes. Because of these potential harms, lawmakers and others have begun expressing concerns about deep-fake technology.

Underlying these concerns is the superficially reasonable assumption that deep fakes represent an unprecedented development in the ecosystem of disinformation, largely because deep-fake technology can create such realistic-looking content. Yet this argument assumes that the quality of the content carries the most weight in the trust evaluation. In other words, people making this argument believe that the highly realistic content of a deep fake will induce the viewer to trust it — and share it with other people in a social network — thus hastening the spread of disinformation.

But there are several reasons to be suspicious of that assumption. In reality, deep-fake technology operates much like other media that people use to spread disinformation. Whether content is believed and shared may depend less on its quality than on psychological factors that any type of deceptive media can exploit. Thus, contrary to the hype, deep fakes may not be the techno-boogeyman some claim them to be.

Deceiving With a Deep Fake

When presented with any piece of information — be it a photograph, a news story, a video, etc. — people do not simply take that information at face value. Instead, individuals in today’s internet ecosystem rely heavily on their network of social contacts when deciding whether to trust content online. In one study, for example, researchers found that participants were more likely to trust an article when it had been shared by people whom the individual already trusted.

This conclusion comports with an evolutionary understanding of human trust. In fact, humans likely evolved to believe information that comes from within their social networks, regardless of its content or quality.

At a basic level, one would expect such trust to be unfounded; individuals usually try to maximize their fitness (the likelihood they will survive and reproduce) at the expense of others. If an individual sees an incoming danger and fails to alert anyone else, that individual may have a better chance of surviving that specific interaction.

However, life is more complex than that. Studies suggest that in repeated interactions with the same individual, a person is more likely to extend trust because, without any trust, neither party would gain in the long term. When members of a group can rely on one another, individuals within the group gain a net benefit on average.

Of course, a single lie or selfish action could help an individual survive an individual encounter. But if all members of the group acted that way, the overall fitness of the group would decrease. And because groups with more cooperation and trust among their members are more successful, these traits were more likely to survive on an aggregate level.
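To make that intuition concrete, the payoff logic can be sketched as a bare-bones iterated prisoner’s dilemma. This is a minimal illustration under assumed payoff values (the 3/0/5/1 numbers and the round count are invented for the sketch, not drawn from the studies above): defecting against a cooperator pays best in a single encounter, but mutual cooperation wins out once interactions repeat.

```python
# A minimal iterated prisoner's dilemma sketch. The payoff values are
# illustrative assumptions, not figures from the studies cited above.

PAYOFFS = {
    # (my_move, their_move): my_payoff
    ("cooperate", "cooperate"): 3,  # mutual trust
    ("cooperate", "defect"): 0,     # betrayed
    ("defect", "cooperate"): 5,     # one-off selfish gain
    ("defect", "defect"): 1,        # mutual distrust
}

def total_payoff(my_move, their_move, rounds=100):
    """Sum one player's payoff over repeated interactions."""
    return sum(PAYOFFS[(my_move, their_move)] for _ in range(rounds))

# A single defection against a cooperator pays more than cooperating
# (5 vs. 3), but over 100 rounds mutual cooperators far outscore
# mutual defectors (300 vs. 100 each).
print(total_payoff("cooperate", "cooperate"))  # 300
print(total_payoff("defect", "defect"))        # 100
```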

Humans today, therefore, tend to trust those close to them in a social network because such behavior helped the species survive in the past. For a deep fake, then, the apparent authenticity of the video may matter less in deciding whether to trust that information than whether the individual trusts the person who shared it.

Further, even the most realistic, truthful-sounding information can fail to produce trust when the individual holds beliefs that contradict the presented information. The theory of cognitive dissonance contends that when an individual’s beliefs contradict his or her perception, mental tension — or cognitive dissonance — is created. The individual will attempt to resolve this dissonance in several ways, one of which is to accept evidence that supports his or her existing beliefs and dismiss evidence that does not. This leads to what is known as confirmation bias.

One fascinating example of confirmation bias in action came in the wake of President Donald Trump’s press secretary claiming that more people watched Trump’s inauguration than any other inauguration in history. Despite the video evidence and a side-by-side photo comparison of the National Mall indicating the contrary, many Trump supporters claimed that a photo depicting turnout on Jan. 20, 2017, showed a fuller crowd than it actually did because they knew it was a photo of Trump’s inauguration. (Sean Spicer later clarified that he was including the television audience as well as the in-person audience, but the accuracy of that characterization is also debatable.) In other words, the Trump supporters either convinced themselves that the crowd size was larger despite observable evidence to the contrary, or they knowingly lied to support — or confirm — their bias.

The simple fact is that it does not require much convincing to deceive the human mind. For instance, multiple studies have shown that rudimentary disinformation can generate inaccurate memories in the targeted individual. In one study, researchers were able to implant fake childhood memories in subjects by simply providing a textual description of an event that never occurred.

According to these theories, then, when it comes to whether a person believes a deep fake is real, the quality matters less than whether an individual has pre-existing biases or trusts the person who shared it. In other words, existing beliefs, not the perceived “realness” of a medium, drive whether new information is believed. And, given the diminished role that the quality of a medium plays in the believability calculus, more rudimentary methods — like using Photoshop to alter photographs — can achieve the same results as a deep fake in terms of spreading disinformation. Thus, while deep fakes present a challenge generally, deep fakes as a class of disinformation do not present an altogether new problem as far as believability is concerned.

Sharing Deep Fakes Online

With the rise of social media and the fundamental change in how we share information, some worry that the unique characteristics of deep fakes could make them more likely to be shared online regardless of whether they deceive the target audience.

People share information — whether in written, picture, or video form — online for many different reasons. Some may share it because it is amusing or pleasing. Others may do so because it offers partisan political advantage. Sometimes the sharer knows the information is false. Other times, the sharer does not know whether the information is accurate but simply does not care enough to correct the record.

People also tend to display a form of herd behavior in which seeing others share content drives the individual to share the content themselves. This allows disinformation to spread across large platforms like Facebook or Twitter as the content accumulates shares. The number of people who receive disinformation, then, can grow exponentially at a very rapid pace. As the popularity of a given piece of content increases, so too does its credibility as it reaches the edges of a network, exploiting the trust that individuals have in their social networks. And even if the target audience does not believe a given deep fake, widespread propagation of the content can still cause damage; simply viewing false content can reinforce beliefs that the user already has, even if the individual knows that the content is an exaggeration or a parody.
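As a rough illustration of that exponential growth, consider a simple branching model of resharing. The numbers below (contacts per sharer, reshare probability) are invented for the sketch, not measured platform figures; the point is only how quickly exposure compounds hop by hop.

```python
# A rough branching-process sketch of herd-driven sharing. Contact
# count and reshare rate are illustrative assumptions, not platform data.

def simulate_reach(initial_sharers=1, contacts=50,
                   reshare_rate=0.1, hops=6):
    """Print how many people see a post at each hop from the source."""
    sharers = initial_sharers
    total_reached = 0
    for hop in range(1, hops + 1):
        exposed = sharers * contacts            # people shown the post
        total_reached += exposed
        sharers = int(exposed * reshare_rate)   # those who pass it on
        print(f"hop {hop}: {exposed:,} exposed ({total_reached:,} total)")
    return total_reached

# With 50 contacts and a 10% reshare rate, each sharer spawns five new
# sharers, so exposure multiplies fivefold at every hop.
simulate_reach()
```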

Deep fakes, in particular, present the audience with rich sound and video that engage the viewer. A realistic deep fake that can target the user’s existing beliefs and exploit his or her social ties, therefore, may spread rapidly online. But so, too, do news articles and simple image-based memes. Even without the richness of a deep fake, still images and written text can target the psychological factors that drive content-sharing online. In fact, image-based memes already spread at alarming rates due to their simplicity and the ease with which they convey information. And while herd-behavior tendencies will drive more people to share content, this applies to all forms of disinformation, not just deep fakes.

Many people still treat video as an undeniable record of events. But as this technology becomes more commonplace and the limitations of video become more apparent, the psychological factors above will drive trust and sharing. And the tactics that bad actors use to deceive will exploit these social patterns regardless of medium.

When viewed in this context, deep fakes are not some unprecedented challenge society cannot adapt to; they are simply another tool of disinformation. We should of course remain vigilant and understand that deep fakes will be used to spread disinformation. But we also need to consider that deep fakes may not live up to the hype.

Jeffrey Westling (@jeffreywestling) is a Technology and Innovation Research Associate at the R Street Institute.


