AI and deepfakes bring history to life – but at a high moral price



To mark Israel Memorial Day in 2021, the Israel Defense Forces music ensembles partnered with a company specializing in synthetic video, also known as “deepfake” technology, to bring to life photos from the 1948 Israeli-Arab war.

They produced a video in which young singers in period uniforms and with period weapons sang “Hareut”, an iconic song in memory of soldiers who fell in battle. As they sing, the musicians stare at faded black and white photos they hold in their hands. The young soldiers in the old pictures blink and smile back thanks to artificial intelligence.

The result is uncanny. The past comes alive, Harry Potter-style.

Over the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been researching how everyday engagement with AI challenges the way people think about themselves and politics. We have found that AI has the potential to weaken people’s capacity to make ordinary judgments. We have also found that it undermines the role of serendipity in their lives and can lead them to question what they know or believe about human rights.

Now, AI is making it easier than ever to revive the past. Will that change our understanding of history and, with it, ourselves?

Musicians dressed as soldiers connect with soldiers in old photos in a 2021 production by the Israel Defense Forces and the company D-ID.

Moral cost

The desire to bring the past back to life is not new. Reenactments of the Civil War or the War of Independence are commonplace. In 2018, Peter Jackson painstakingly restored and colorized First World War footage to create “They Shall Not Grow Old,” a film that brought the Great War vividly before 21st-century audiences.

Live reenactments and carefully edited historical footage are expensive and time-consuming endeavors. Deepfake technology democratizes such efforts, offering an inexpensive and widely accessible tool to animate old photos or create convincing fake videos from scratch.

But as with all new technologies, along with the exciting possibilities, there are serious moral questions. And the questions get even trickier when these new tools are used to improve understanding of the past and revive historical episodes.

The eighteenth-century writer and statesman Edmund Burke believed that political identity is not simply what people make of it. It is not merely a product of our own manufacture. Rather, being part of a community means being part of a contract between the generations – part of a shared enterprise that connects the living, the dead, and those yet to be born.

If Burke is right to understand political belonging in this way, deepfake technology offers a powerful means of connecting people with the past and forging that intergenerational contract. By bringing the past to life in a vivid and compelling way, the technology enlivens the “dead” past. When these images inspire empathy and concern for our ancestors, deepfakes can make the past matter far more.

But this ability comes with risks. An obvious danger is the creation of fake historical episodes. Imagined, mythologized and fabricated events can trigger wars: a historic battlefield defeat in Kosovo in the 14th century has fueled nationalist sentiment for centuries.

Similarly, a supposed second attack on American warships in the Gulf of Tonkin on Aug. 4, 1964, was used to escalate American involvement in Vietnam. It later emerged that the attack never took place.

Atrophy of the imagination

It used to be difficult and expensive to stage fake events. No longer.

For example, imagine what strategically manipulated deepfake footage of the events of Jan. 6 could accomplish.

The result, of course, is that deepfakes can gradually destabilize the very idea of a historical “event.” Perhaps over time, as this technology evolves and becomes ubiquitous, people will automatically question whether what they are seeing is real.

It is an open question whether this will lead to more political instability or, paradoxically, to more stability, owing to a reluctance to act on the basis of possibly fabricated events.

But aside from fears about the wholesale fabrication of history, there are more subtle ramifications that worry me.

Yes, deepfakes let us experience the past more vividly and can thereby strengthen our engagement with history. But does this use of the technology risk stunting our imaginations – handing us prefabricated, limited images of the past that become the default associations for historical events? An effort of imagination can conjure the horrors of World War II, the 1906 San Francisco earthquake, or the 1919 Paris Peace Conference in endless variations.

But will people keep using their imaginations this way? Or will deepfakes, with their lifelike, moving representations, become convenient stand-ins for history? I fear that animated versions of the past could create the impression that the viewer knows exactly what happened – that the past is fully present to them – which then removes any need to learn more about the historical event.

People tend to assume that technology simply makes life easier. What they fail to realize, however, is that technological tools keep remaking the toolmakers themselves – eroding existing skills even as they open up unimagined and exciting possibilities.

With the advent of smartphones, photos could easily be posted online. But it has also meant that some people no longer experience breathtaking views as they once did, because they are so fixated on capturing an “Instagrammable” moment. And with the omnipresence of GPS, getting lost is hardly ever experienced in the old way. Likewise, AI-generated deepfakes are not just tools that automatically improve our understanding of the past.

Still, this technology will soon revolutionize society’s connection with history, for better or for worse.

People have always been better at inventing things than at thinking about what the things they invent do to them – “ever more clever with objects than with lives,” as the poet W.H. Auden put it. This inability to envision the downsides of technological advances is not fate, however. It is still possible to slow down and think about how best to experience the past.

Nir Eisikovits is Associate Professor of Philosophy and Director of the Center for Applied Ethics at the University of Massachusetts Boston.

This article first appeared on The Conversation.
