{"id":10564,"date":"2025-03-31T10:00:00","date_gmt":"2025-03-31T08:00:00","guid":{"rendered":"https:\/\/haimagazine.com\/uncategorized\/deepfake-and-the-truth-crisis\/"},"modified":"2025-06-26T15:38:30","modified_gmt":"2025-06-26T13:38:30","slug":"deepfake-and-the-truth-crisis","status":"publish","type":"post","link":"https:\/\/haimagazine.com\/en\/hai-magazine-4\/deepfake-and-the-truth-crisis\/","title":{"rendered":"\ud83d\udd12 Deepfake and the truth crisis"},"content":{"rendered":"<p><strong>Dariusz Jemielniak: <\/strong>Today we&#8217;ll talk about the safety issues and potential threats related to AI. Because, darn it, safety seems like an ever bigger challenge, including technologically. <\/p><p><strong>Aleksandra Przegali\u0144ska:<\/strong> I thought a lot about this before we met here today. The topic that I&#8217;m most into right now is deepfakes, which are being used in increasingly stranger ways. Over the past two years, I&#8217;ve been watching these desperate profiles on Instagram that make fun of politicians in many countries \u2013 like transforming them in various ways, turning them into bodybuilders or elderly ladies, morphing some current president into a grandma, and so on. It&#8217;s a form of public, political commentary, and in that sense it&#8217;s probably allowed. Anyway, I don&#8217;t see any of these forums getting deleted. I&#8217;m curious though, whether seeing such a video amid others on their Instagram feed, everyone actually knows that it&#8217;s satire. For sure, anyone who wants to check can click on the account and read the small print caption. But considering the tremendous flood of content we&#8217;re dealing with, it&#8217;s easier to get lost. I think we&#8217;re having some kind of epistemological crisis \u2013 less and less clarity about what&#8217;s true and what&#8217;s not from what we see and hear.         <\/p><p>Unfortunately, there will only be more uncertainty. 
First of all, with the new administration in the USA, there are new rules of the game that let platforms like Meta (Facebook) publish all sorts of content without filters or restrictions, visible to anyone. Recently, for example, content simulating physical violence started popping up \u2013 and that&#8217;s no longer satire but a deliberate attempt to disrupt public debate. It also creates an incentive to use AI for such unethical purposes, to shock and grab attention. <\/p><p>I&#8217;d also like to point out something that appeared on President Trump&#8217;s profile \u2013 a bizarre, scary AI-generated vision of Gaza&#8217;s future. I wonder whether Trump himself has even watched the clip: the video opens with actual ruins and debris, and then suddenly a beautiful beach with a hotel complex unfolds before us, with everyone happy and enjoying themselves. But then some weird things happen \u2013 Musk is dancing, there are bearded women, and Scheherazade-like figures dancing with Trump. It&#8217;s absolutely psychedelic, wild, and honestly, it&#8217;s hard to tell whether it&#8217;s even satire anymore. And who exactly is President Trump mocking here? <\/p><p>The fact that something like this was made public by arguably the most powerful man in the world, the President of the USA, in a way validates and legitimizes deepfake communication \u2013 communication about an alternate reality that doesn&#8217;t actually exist but theoretically could. This is no longer just commentary, sniping at politicians, or a method of political combat against an opponent; it&#8217;s a way to build your own narrative and rewrite current history with deepfakes, shaping it as you please. The real tipping point for me, a truly significant event when it comes to deepfakes, is that someone gave the green light. 
Someone very high up in the power structure said: hey, let&#8217;s use this, true or not \u2013 let&#8217;s spread it around, without comment or reflection, and just let it happen. <\/p><p>And these are real online threats. I&#8217;m not even talking about hardcore cybersecurity and targeted attacks on infrastructure, but about the threat to public discourse. For a while now, people have been talking about the death of the internet \u2013 the one we used to know, where content was good or bad but made by humans. Instead, we now have permission for the AI-assisted &#8220;deepfaking&#8221; of social media platforms. And that, I think, may really be the beginning of their end. It&#8217;s possible that a large share of users won&#8217;t put up with the excess of falsehood anymore \u2013 but it&#8217;s equally possible that we&#8217;ll simply adapt to this dumpster of information and start ignoring it completely. <\/p><p><strong>DJ:<\/strong> First, let&#8217;s clarify what fake news is, because it&#8217;s a fairly precise term. It describes a situation where someone spreads false information and the recipients concede that the facts may not line up, yet still feel the bigger story somehow makes sense. So when we wrongfully accuse someone of theft, people think: okay, maybe they didn&#8217;t steal, but they must be involved somehow (why else would their name be linked to theft?). That&#8217;s a really dangerous phenomenon, because in this sense what Trump is doing joins the trend of selling dreams \u2013 or rather of sugarcoating tragedies. Of course there will be displacement and destruction, but at the end of this apocalypse a treasure awaits: if we want that riviera with hotels, first we have to level the existing homes, and the people in them, to the ground. <\/p><p><strong>AP:<\/strong> Exactly \u2013 and those people aren&#8217;t there. 
What&#8217;s absolutely terrifying about this bizarre vision is that it isn&#8217;t rooted in the desires or dreams of either party to the conflict. It&#8217;s essentially a vision of colonization by the USA, complete with the chauvinistic motif of belly dancers fawning over Trump and Musk. Both are having fun and dining on these beaches, but there are no local leaders and no local people. So a major risk, in the context of disinformation, deepfakes and the bending of truth, is that such a fake prophecy came with no comment explaining the intent or warning that it&#8217;s just someone&#8217;s drug-induced dream. <\/p><p><strong>DJ:<\/strong> Totally agree \u2013 though that forecast of the internet&#8217;s death reminds me of what&#8217;s happening right now in photography. Major photography contests have had separate categories for edited and original photos for years. Maybe the same awaits texts, traditional media, videos, and all published content. The issue is also somewhat similar to doping. <\/p><p>If someone uses doping in a sports competition, they get disqualified. Any day now, though, we may find that artificial intelligence gives such a huge boost to all kinds of generative skills that using these tools will be treated like illegal doping. I really hope we choose a different direction \u2013 that we learn the concept of Me + AI, where a synergy based on the conscious and ethical use of artificial intelligence means technology helps us rather than replacing or disqualifying us. Ethics and responsibility are extremely important in the context of these threats, especially deepfakes, because some teenagers are already using readily available technology to generate pornographic content featuring their female (less often male) classmates. That&#8217;s a terrible threat, because it affects the mental health of young people. 
But it&#8217;s not just the young generations in the crosshairs \u2013 older people are too. The infamous &#8220;grandchild scam&#8221; has become an even more sophisticated way of deceiving seniors: thanks to generative artificial intelligence, it&#8217;s possible to create an image and voice of the &#8220;grandchild&#8221; that strikingly resemble the real one. <\/p><p><strong>AP:<\/strong> We somehow agree to this suspension of reality, unfortunately. You&#8217;re talking about kids generating explicit content, which is an extreme case, but I read somewhere recently that about 90% of kids in the UK use generative AI tools to do their homework. And this is what manufacturing fiction looks like: homework comes in a format that&#8217;s easy to generate, kids have access to the technology, and no one at home checks. That isn&#8217;t actually doing the assignment \u2013 it&#8217;s a simulation of doing it. So there&#8217;s zero benefit for the child, since not doing the task independently means learning nothing. On the other hand, since these tools are available, it would be good to find creative ways to use them, as you said. The potential is huge, but we need to go beyond delegating tasks as a way of replacing people \u2013 whether at work, at school or in private life. <\/p><p>Recently I listened with great interest to a podcast episode about the dating app Hinge, which now has an AI assistant. Its job is to improve your profile by giving advice. It&#8217;s like a mentor \u2013 not the kind of AI you tell to create your account and it writes for you, but the kind that advises you on what you could write differently or better. <\/p><p>The show&#8217;s host asked an interesting question about the effects of introducing that option. Suddenly everyone becomes someone completely different: everyone&#8217;s super fancy, the photos are all neatly polished, and the descriptions are anything but ordinary. 
On one hand, we&#8217;re raising the bar \u2013 the content gets higher quality, cooler. On the other hand, if someone wrote that they&#8217;re boring, have no hobbies, or like pizza and traveling, there was at least some truth about that person. It might not have won them fans, but when someone decided to get to know them based on (or despite) such a description, at least they knew what to expect. Now the average Joe is suddenly quoting Nabokov and posing in front of a trendy gallery. <\/p><p>Even though many fears about artificial intelligence are voiced nowadays, especially about loss of control and excessive autonomy, the real threat is the epistemological one \u2013 this blurring of the truth. We didn&#8217;t create a tool that thinks, feels, and has a will of its own \u2013 just a tool that generates images, texts and videos that were never made, written, or filmed. And we&#8217;re increasingly letting this reality in, even at the highest levels of politics. <\/p><p><strong>DJ:<\/strong> In my opinion, this leads to a really cool thing; good thing you mentioned the Hinge app. Soon one AI will be chatting with another: a user&#8217;s bot will talk to someone else&#8217;s bot, suggest who&#8217;s crushing on whom, and the only thing left undone will be the getting closer, because AI can&#8217;t do that for us yet. <\/p><p><strong>AP:<\/strong> Picture this: your AI agent and my AI agent meet up to hash out, say, a disagreement about where we&#8217;d like to publish a book together. They talk it out, find common ground, and boom \u2013 the whole negotiation is cut down to fifteen minutes, because they sort out almost everything on our behalf, leaving us only the bits where humans are needed (like the final check).<\/p><p>But in my opinion, no one has yet figured out where to set the limit. 
At what point in our work do we feel we can ease up on intervention and control without worrying about the consequences, and simply enjoy smooth representation? Where is the line beyond which our affairs happen entirely without us? That&#8217;s risky. <\/p><p>The thing is, these agent models operate at a completely different speed. In a short time they can exchange something like 3.5 million messages about the issue we were supposed to discuss. The discussion literally takes on a life of its own, happening somewhere beside us, and it would be hard to trace back everything that went on in it. It&#8217;s like returning to the corporate inbox after a long vacation. <\/p><p><strong>DJ:<\/strong> But it&#8217;s inevitable, because the temptation to save time is just too great. I think it all boils down to that in the end. We&#8217;ll find out we got divorced, or married someone, from a notification in our calendar. <\/p><p><strong>AP:<\/strong> There&#8217;s also the optimistic vision of synergy, where we finally focus on what&#8217;s important and unleash our potential while AI handles the less important stuff. But if you delegate every possible task, including those related to personal life and relationships, sure, you&#8217;ll have plenty of time left \u2013 but for what? <\/p><p><strong>DJ:<\/strong> For the most important tasks, like peeling potatoes, because there&#8217;s no artificial intelligence for that.<\/p><p><strong>AP:<\/strong> I think this postulate \u2013 close to my heart, and yours too \u2013 this idea of Human+AI Collaboration, is all about putting the right emphasis on what can be delegated and what should remain in human hands. When I saw that vision of Gaza delegated to AI, I thought: &#8220;We have totally gone the wrong way&#8221;, because discussions about the future of the conflict and life after it ends are essential and should absolutely stay in human hands, not take on the psychedelic forms of a generative model. 
<\/p><p><strong>DJ:<\/strong> What you&#8217;re saying is really important, because we need to keep emphasizing how crucial it is not to lose that element of oversight. If I&#8217;m making my life easier but still have control over my &#8220;assistants&#8221; and their actions, then such cooperation can indeed yield powerful results, even a phenomenal breakthrough. But if I&#8217;m trying to replace my own decision-making, that&#8217;s problematic, and the example with kids is quite telling. If we learn from the start that AI isn&#8217;t meant to amplify our abilities but merely to do the work in our place, the outcome is obviously stunted. What&#8217;s more, it can lead to ridiculous situations, like a bot setting up a date with another bot on our behalf. If models start replacing us too much, then when we disappear from the Earth&#8230; will anyone notice? And yet we&#8217;d really like something in these conversations to change. <\/p><p><strong>AP:<\/strong> This level of cooperation and decision delegation was also nicely covered in another episode of the podcast I mentioned. It told the story of a woman who decided to follow the dictates of ChatGPT \u2013 it was to plan her week, her daily schedule. When she went to the hairdresser, for example, she went with a hairstyle planned by the model. For a whole week, she and her family followed the model&#8217;s suggestions on everything: what to eat, how to spend free time, scheduling, work deliverables. She simply decided to run this fascinating experiment and then shared her experiences. <\/p><p>On one hand, of course, there are the expected comments about the immaturity of the technology \u2013 that in a few months or years the quality of these suggestions will be much better. But what was most interesting about the experience was that she became not herself but a sort of everyman, stripped of her own character, her features, maybe even her flaws and quirks. 
&#8220;Basic bitch&#8221;, as she described herself \u2013 painfully average. Even when the recommended actions got her noticed online or praised at work, she knew it wasn&#8217;t about her or meant for her, but for the generic avatar of herself that she had momentarily become. <\/p><p>So one aspect of deepfakes is that they alter reality by presenting an alternative narrative. That&#8217;s close to the misinformation you were talking about \u2013 building a picture of a world that doesn&#8217;t exist, one that contradicts the facts and can serve political or propaganda purposes. But the other issue is the creation of a reality that&#8217;s an averaged version of all narratives. When students hand in AI-generated assignments, they&#8217;re at most a C+, and all very similar to one another. They lack a unique touch, any interesting input. Everything is correct, but superficial and uninteresting. <\/p><p><strong>DJ:<\/strong> Exactly! Bauman once observed that when we view culture as a system, we miss the fact that what&#8217;s most interesting in culture, in creativity, is innovation: neologisms, something unique and different from the standard, a creative &#8220;anomaly&#8221;. Maybe that&#8217;s how we should look at AI too. The more we follow the averaged pattern AI pushes on us, the safer we are \u2013 it&#8217;s a choice &#8220;insured&#8221; through averaging \u2013 but at the same time we become cookie-cutter, bland. In this way a whole army could arise, not so much of people as of human replicas of those same averaged patterns. So again: the real threat today isn&#8217;t that AI, in the form of a giant robotic monster, will come and eat us. The threat is rather to our little peculiarities \u2013 that they will be averaged out and erased. <\/p><p><strong>AP:<\/strong> Yes! That&#8217;s why I want to add that I recently decided to present only my own voice on social media from now on. 
There were times when I would generate something and edit it. So even when I present some data, I ask the model for a little help with the visual side \u2013 adding flags and so on, just to make it look nice graphically, but I generally write the text in my own voice.   <\/p><p>I feel like, practically speaking, it also really clicks better, at least for now. I myself pay more attention to posts that I feel were written by a human \u2013 and I&#8217;m pretty good at sensing that. Even if these human posts are more chaotic, sometimes careless, they\u2019re still more interesting because they&#8217;re different, unique, personal.  <\/p>","protected":false},"excerpt":{"rendered":"<p>The threats of artificial intelligence.<\/p>\n","protected":false},"author":247,"featured_media":9677,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"rank_math_lock_modified_date":false,"footnotes":""},"categories":[783,673,781,674,784],"tags":[],"popular":[],"difficulty-level":[36],"ppma_author":[614,629],"class_list":["post-10564","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-industry","category-hai-magazine-4","category-hai-premium","category-issue-4","category-security","difficulty-level-easy"],"acf":[],"authors":[{"term_id":614,"user_id":247,"is_guest":0,"slug":"prof-dr-hab-dariusz-jemielniak","display_name":"prof. dr hab. Dariusz Jemielniak","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/maxresdefault-1-e1742292469999.jpg","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/maxresdefault-1-e1742292469999.jpg"},"first_name":"Dariusz","last_name":"Jemielniak","user_url":"","job_title":"","description":"Profesor zarz\u0105dzania Akademii Leona Ko\u017ami\u0144skiego, gdzie kieruje katedr\u0105 MINDS (Management in Networked and Digital Societies). 
Pracuje te\u017c jako faculty associate w Berkman-Klein Center for Internet and Society na Harvardzie. Wiceprezes Polskiej Akademii Nauk. Cz\u0142onek Rady Programowej CampusAI."},{"term_id":629,"user_id":267,"is_guest":0,"slug":"prof-alk-aleksandra-przegalinska","display_name":"prof. ALK Aleksandra Przegali\u0144ska","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/aleksandraprzegalinska-1-2-e1742292801552.jpg","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/aleksandraprzegalinska-1-2-e1742292801552.jpg"},"first_name":"prof. ALK Aleksandra","last_name":"Przegali\u0144ska","user_url":"","job_title":"","description":"Filozofka, badaczka sztucznej inteligencji. Profesor Akademii Leona Ko\u017ami\u0144skiego i prorektorka ds. innowacji na tej uczelni. Badaczka Center for Labour and a Just Economy. Specjalizuje si\u0119 w interakcji cz\u0142owieka z AI. Jest autork\u0105 licznych publikacji dotycz\u0105cych etycznych i spo\u0142ecznych aspekt\u00f3w nowych technologii. 
Cz\u0142onkini Rady Programowej CampusAI."}],"_links":{"self":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10564","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/users\/247"}],"replies":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/comments?post=10564"}],"version-history":[{"count":1,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10564\/revisions"}],"predecessor-version":[{"id":10565,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10564\/revisions\/10565"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media\/9677"}],"wp:attachment":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media?parent=10564"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/categories?post=10564"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/tags?post=10564"},{"taxonomy":"popular","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/popular?post=10564"},{"taxonomy":"difficulty-level","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/difficulty-level?post=10564"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/ppma_author?post=10564"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}