Käsitsi süvavõltsitud maailm. Postfenomenoloogiline uurimus inimese ja pilditehnika suhte muutustest süvavõltsingute kontekstis / The Handmade Deepfake World: A Postphenomenological Study of the Changing Human–Imaging Technology Relation in the Context of Deepfakes

Authors

  • Kevin Rändi Tallinna Ülikool / Tallinn University
  • Oliver Laas Tallinna Ülikool / Tallinn University

DOI:

https://doi.org/10.7592/methis.v27i34.24692

Keywords:

postphenomenology, deepfakes, artificial intelligence, hermeneutic relation, imaging technology

Abstract

The article turns a postphenomenological gaze on deepfakes. Postphenomenology is a mode of thought that proceeds from subjective experience and examines how technology, understood as material artefacts and their modes of use, shapes meaning-making. We focus on the human–world relation that deepfakes bring with them, a relation centred on imaging technology and recordings. In our account, we show that the experience of deepfakes carries with it an understanding of the epistemic norms of digital recordings that resembles the norms governing handmade images.

 

Artificial intelligence (AI) exerts a transformative pressure on our ongoing engagement with other technologies and on our meaning-making practices, both about ourselves and about the world. Intriguing predictions and narratives about possible futures, however, are not limited to AI as a ubiquitous technology requiring careful ethical consideration. Some of the relevant issues concern our present-day imaging technologies. Photos, videos, and other recordings have traditionally contributed to a shared understanding of the facts. Machine learning algorithms enable the creation of various forms of synthetic media, including deepfakes, which greatly simplify the manipulation of recordings. Deepfakes are notorious because they enable the generation of believable recordings of events that never took place, for example by seamlessly replacing human faces and voices. Since they emerged around 2017, deepfakes have been used to generate non-consensual pornographic material and to spread political disinformation. Such uses may be reason enough for legislative action to combat deepfakes.

Moral issues notwithstanding, an underlying intellectual concern has to do with the future of trust in recordings as a means of establishing the facts. The ubiquity of AI tools that facilitate manipulating recordings in ways that make them indistinguishable from authentic ones could further exacerbate the problem of disinformation. As such, deepfakes are a serious threat: they cause epistemic harm and could bring about a form of dystopia called the ‘infocalypse’ (Schick 2020), a large-scale loss of control over information.

We consider deepfakes from a phenomenological point of view, known as postphenomenology, that puts special emphasis on the role of technologies (or artefacts) as mediators of how the world gains its meaning (Verbeek 2005). Don Ihde (1934–2024), the originator of postphenomenological thought, emphasised practice-oriented lifeworlds, textured both culturally and technologically, in which humans make sense of the world around them and use artefacts in diverse ways. Thus, postphenomenological research avoids totalising accounts of the future, such as the infocalypse, and instead focuses on the complexities found in the practices and varieties of human–technology relations. Regarding deepfakes, this means taking a socio-technically embedded view of embodied human beings and their engagement with imaging technologies, in which machine learning plays an active role in shaping our experience of the images.

From a postphenomenological perspective, the human–technology relation that characterises our use of imaging technologies is a hermeneutic one. The world mediated through technological imaging may be more ‘transparent’ to experts than to non-experts, and its apprehension relies on strategies for interpreting the image (Rosenberger 2008). On the screens of everyday life, even though many users are familiar with the technical or platform-related context of recordings, it can nevertheless be argued that deepfakes further erode our ability to interpret recordings and to switch between interpretive strategies in order to distinguish a truthful representation from a deceptive one.

However, a purely postphenomenological account would provide an incomplete understanding of deepfakes and of the possibilities of meaning-making through imaging technologies. With the increasing involvement of AI and algorithms in imaging technologies, postphenomenology has recognised the need to expand hermeneutic human–technology relations (Wiltse 2014; Wellner 2020). Furthermore, AI poses problems for artefact-centred approaches in postphenomenology (Coeckelbergh 2022). In light of this, we argue for a socio-technical systems approach to human–technology relations, combined with postphenomenology, in the context of AI. From this combined theoretical perspective, we propose a hermeneutical programme that accounts for how humans can change the perspectives from which they view recordings and how this influences their interpretations of those recordings. This also has ethical significance for designing against deepfakes.

In contrast to other philosophical explanations, we argue that deepfakes are epistemically harmful because they undermine trust in recording technologies. This changes our hermeneutic relationship with recordings. We analyse this change by comparing how traditional and digital photography mediate our relationship with the world. Relying on Hopkins (2012), we then suggest that deepfake technology, viewed as an AI system (a subtype of socio-technical systems), changes our hermeneutic relationship to recordings so that it begins to resemble our relationship to handmade images. After analysing the latter, we conclude that in the age of deepfakes the reliability of recordings depends on their provenance and the reputation of their source, as is the case with handmade images. In other words, deepfakes change the epistemic norms associated with recordings, and our hermeneutic relationship to them, so that they come to resemble those we have toward handmade images.

 


Author Biographies

Kevin Rändi, Tallinna Ülikool / Tallinn University

Kevin Rändi is a PhD student and junior researcher in the Studies of Cultures programme at the School of Humanities, Tallinn University. He also works as a guest lecturer at Tallinn University of Applied Sciences. His primary interests encompass postphenomenology, philosophy of technology, process philosophy, and philosophy of culture.

Oliver Laas, Tallinna Ülikool / Tallinn University

Oliver Laas has a PhD in philosophy and works as a lecturer in philosophy at the School of Humanities, Tallinn University. His research interests include game studies, metaphysics, logic, philosophy of artificial intelligence, philosophy of technology, philosophy of virtuality, and semiotics.


Published

2024-12-13