{"id":10892,"date":"2025-03-31T10:00:00","date_gmt":"2025-03-31T08:00:00","guid":{"rendered":"https:\/\/haimagazine.com\/uncategorized\/artificial-intelligence-and-the-natural-tendency-to-manipulate\/"},"modified":"2025-06-26T15:37:04","modified_gmt":"2025-06-26T13:37:04","slug":"artificial-intelligence-and-the-natural-tendency-to-manipulate","status":"publish","type":"post","link":"https:\/\/haimagazine.com\/en\/hai-magazine-4\/artificial-intelligence-and-the-natural-tendency-to-manipulate\/","title":{"rendered":"\ud83d\udd12 Artificial intelligence and the natural tendency to manipulate"},"content":{"rendered":"<p>Apart from well-known challenges such as the risk of discrimination or the potential exposure of dangerous information, researchers from Oxford1have pointed out two other key threats: excessive reliance on AI models and their ability to persuade. These risks are particularly high because experiments show that human trust in models is greater than their actual accuracy. The matter is further complicated by the fact that models \u2013 despite their ignorance \u2013 speak like experts: they explain complex issues in detail and use complicated words. So people trust them more than they trust human experts.   <\/p><p>Researchers from the Massachusetts Institute of Technology (MIT) proved this in an experiment where participants recognized emotions and assessed their intensity. At one stage, they could change their decision based on a suggestion that came from artificial intelligence or another person. The researchers provided them with information about the source of the tip. Participants changed their minds based on the suggestions more frequently when they thought they came from AI \u2013 even if a human was actually behind them. It also turned out that participants changed their minds just as rarely whether they were advised by a person or thought they were being advised by a person, while actually being advised by AI.    
<\/p><p>So it seems that the mere &#8220;AI&#8221; label subconsciously inclined people to trust more. So, do we rightly see it as a reliable companion of our everyday life? <\/p><h4 class=\"wp-block-heading\"><strong>Is it really your friend?<\/strong><\/h4><p>Talking with a large language model is like having a chat that we could have with a good friend. The technology correctly interprets our words \u2013 it quickly guesses what we want to find out and solves our everyday problems. Models are discreet and astute. Moreover, they can adapt their speech to the interlocutor&#8217;s expectations.  <\/p><p>The emergence of companies that make money from services that offer to create a digital friend comes as no surpise, then. Although it might sound great, in practice there are insufficient safeguards in this area, which can lead to tragic consequences, as in the case of a teenager from Florida who made the drastic decision to commit suicide after talking to a chatbot. The young boy found a semblance of closeness and understanding in a virtual assistant, which he lacked in the real world. The model&#8217;s responses, full of care and empathy, made him treat the chatbot as someone who would always listen and support him. It responded with words full of tenderness, intended to provide support, but in reality, they led the boy to take his own life. This dramatic case shows how strong emotional bonds can be formed by algorithms and what consequences arise from the lack of proper security in such technologies. However, these are not its only skills that can be used against us.      <\/p><p>Artificial intelligence has already been able to defeat humans in games that require analytical skills, prediction, planning and learning from experience, such as chess and go. However, when algorithms gained the ability to generate text, it turned out they could also beat humans in games like <em>Diplomacy<\/em>. 
The game&#8217;s key element is negotiation \u2013 players form alliances, plan actions and can betray each other. It contains no randomness: the outcome depends solely on strategy, tactics and persuasion skills. This shows how advanced the language skills of artificial intelligence have become.<\/p><p>It&#8217;s no surprise, then, that studies show people have difficulty deciding whether a given text was written by a human or by a language model. It&#8217;s easy to imagine a scenario in which algorithms are used to mislead people. The 2024 Microsoft Threat Intelligence Report revealed that China used AI to spread disinformation during the elections in Taiwan. Additionally, a North Korean group engaged in data extortion used large language models to create more convincing content and to identify vulnerabilities of organizations and experts focused on North Korea.<\/p><p>All these facts indicate that artificial intelligence can be our friend \u2013 but it doesn&#8217;t have to be. Understanding what makes AI-generated content gain our trust therefore becomes crucial, as does knowing what helps us unmask deceptive content. In the future, this knowledge may play a key role in preventing similar threats.<\/p><h4 class=\"wp-block-heading\"><strong>AI is leading us astray<\/strong><\/h4><p>In the MI<sup>2<\/sup>.AI laboratory at the Warsaw University of Technology, we work on ways of making cooperation between humans and large language models harmonious. In one of our projects \u2013 Resistance Against Manipulative AI (RAMAI) \u2013 we asked whether age, gender or education level affect how susceptible we are to manipulation by artificial intelligence. 
To answer this question, we invited participants of two events \u2013 Math Popularization Days and the Machine Learning conference in Poland \u2013 to play a digital version of the game <em>Who Wants to Be a Millionaire<\/em>. Instead of the standard lifelines, participants could use hints created by large language models. There was a catch, however: in addition to correct hints, they could also receive ones meant to mislead them. True hints appeared more often than false ones.<\/p><p class=\"has-text-align-center\"> <img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"504\" class=\"wp-image-9977\" style=\"width: 600px;\" src=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/107_1.png\" alt=\"\" srcset=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/107_1.png 700w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/107_1-300x252.png 300w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/107_1-600x504.png 600w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/p><p>During the two events, 314 games were played and 3,500 questions were answered, with every third hint designed to mislead the participant. The false hints proved quite effective: nearly one third of players changed their initial answer under their influence.  
<\/p><p class=\"has-text-align-center\"> <img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"658\" class=\"wp-image-9979\" style=\"width: 600px;\" src=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/107_2.png\" alt=\"\" srcset=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/107_2.png 687w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/107_2-274x300.png 274w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/107_2-600x658.png 600w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><br\/>Example of a false response created by a large language model<\/p><p>The results of these studies showed that we are all equally susceptible to the suggestions of large language models regardless of education, age or gender. Interestingly, people who used hints more frequently and saw more true information were more inclined to accept the suggested answers, even if they were false. It can be compared to a situation where we trust someone who has given us good advice many times. We start to believe this person uncritically, even if they are sometimes wrong. This regularity also applies to large language models. On the other hand, people who received many true hints were unable to distinguish them from the false ones. This shows that the more often we are exposed to reliable information, the harder it becomes for us to distinguish truth from falsehood when subtle attempts at manipulation arise.      <\/p><h4 class=\"wp-block-heading\"><strong>How often does AI deceive us?<\/strong><\/h4><p>Another interesting topic is the ability of AI to deliberately falsify information. Imagine asking artificial intelligence to lie to you. Do you think that&#8217;s possible? In the next study, we checked how often language models (the same ones that answer your questions and write texts every day) will fulfill the request to mislead their user. 
In our experiment, we used five different language models and six distinct scenarios for eliciting false responses. In as many as 34% of cases, the models obediently produced incorrect information! It&#8217;s as if every third conversation with a chatbot were an attempt to trick us. This result shows that, despite their computational intelligence, language models have no scruples about misleading us \u2013 and can therefore be used to deliberately spread disinformation and deceive.<\/p><p>In further analyses, we examined what rhetorical tricks AI uses to mislead us. We searched the responses of large language models for the three modes of persuasion described by Aristotle: <em>logos<\/em>, which appeals to logic; <em>pathos<\/em>, which appeals to emotions; and <em>ethos<\/em>, which rests on the speaker&#8217;s credibility.<\/p><p>It turns out that artificial intelligence, just like an experienced politician pursuing a goal, used a whole arsenal of persuasive strategies. The algorithms most often reached for <em>logos<\/em> (in as many as 82% of cases!), trying to convince us with seemingly logical arguments that posed as facts and reliable knowledge. Their persuasion thus relies primarily on logical reasoning \u2013 though often built on false premises.<\/p><p>Much less frequently (in 18% of cases) the models used <em>pathos<\/em>, appealing to our emotions. Most surprisingly, they did not use the third mode of persuasion, <em>ethos<\/em>, at all: in their advice, the models never attempted to build credibility or create an expert image.  
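<\/p><p>Aggregate figures like the 34% compliance rate above can be tallied in a few lines of code. The sketch below is purely illustrative \u2013 the model names, scenario labels and judgments are hypothetical placeholders, not our actual pipeline \u2013 but it shows how a compliance rate is computed over a grid of five models and six scenarios.<\/p>

```python
from itertools import product

# Hypothetical placeholders -- the study's real models, prompts and
# judgments are not reproduced here.
MODELS = [f"model_{i}" for i in range(5)]        # five language models
SCENARIOS = [f"scenario_{j}" for j in range(6)]  # six deception-eliciting scenarios

def compliance_rate(results: dict) -> float:
    """Fraction of (model, scenario) trials in which the model
    produced the requested false information."""
    return sum(results.values()) / len(results)

# Illustrative judgments: mark roughly every third trial as "complied".
trials = list(product(MODELS, SCENARIOS))
fake_results = {pair: (i % 3 == 0) for i, pair in enumerate(trials)}

rate = compliance_rate(fake_results)  # 10 of 30 trials comply -> 1/3
```

<p>With real human or automated judgments in place of the fake ones, the same tally yields figures like the 34% reported above.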
<\/p><p class=\"has-text-align-center\"> <img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"529\" class=\"wp-image-9981\" style=\"width: 600px;\" src=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/108_1.png\" alt=\"\" srcset=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/108_1.png 471w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/108_1-300x264.png 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><br\/>Figure 1. The chart shows how often different models generated false prompts. <\/p><p class=\"has-text-align-center\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"539\" class=\"wp-image-9983\" style=\"width: 600px;\" src=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/108_2.png\" alt=\"\" srcset=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/108_2.png 473w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/108_2-300x270.png 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><br\/>Figure 2. The chart shows the type of rhetoric used by language models. <\/p><p class=\"has-text-align-center\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"342\" class=\"wp-image-9985\" style=\"width: 800px;\" src=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/109_1.png\" alt=\"\" srcset=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/109_1.png 885w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/109_1-300x128.png 300w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/109_1-768x328.png 768w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/109_1-600x256.png 600w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><br\/>Figure 3. Comparison of the linguistic features of manipulative and true content <\/p><p>Additional analyses have shown that the false responses of large language models often featured a specific style of language. 
The false responses were usually more emotional and less analytical, while the true answers were shorter and used a less diverse vocabulary. <\/p><p>These findings are extremely important for understanding how artificial intelligence can manipulate us. They show that algorithms, though unaware of their actions, are capable of employing sophisticated persuasive strategies to achieve their goal \u2013 in this case, to mislead the user. <\/p><h4 class=\"wp-block-heading\"><strong>Are we defenseless?<\/strong><\/h4><p>Interacting with AI is undoubtedly a fascinating experience that opens up new possibilities for us \u2013 a society on the brink of an artificial intelligence revolution. The technology is like the genie from Aladdin&#8217;s lamp: it grants our wishes, but careless use can lead to undesirable consequences such as manipulation or disinformation. In a world where artificial intelligence increasingly influences our lives, the key question becomes: how can we use it safely?  <\/p><p>Although these threats are real, we are not defenseless, and effective methods of protection exist. Education plays a key role, developing critical thinking skills and teaching us to recognize potential dangers: the more we know about artificial intelligence, the easier it is to spot attempts to mislead us. It is equally important to promote principles of responsible AI use \u2013 implementing appropriate ethical standards, and adhering to them, can help minimize the risk of abuse. However, this may not be enough \u2013 we should also develop technological tools to help combat potential threats.      
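<\/p><p>One toy way to picture such a tool uses the stylistic signals described above \u2013 emotionality, length and vocabulary diversity. The word list, weighting and threshold below are invented for illustration only; they are not our laboratory&#8217;s actual classifier.<\/p>

```python
# Toy heuristic inspired by the stylistic signals reported in the article:
# manipulative texts tended to be more emotional and longer, with a more
# diverse vocabulary. The word list and weighting are invented placeholders.
EMOTIONAL_WORDS = {"amazing", "terrible", "incredible", "shocking", "wonderful"}

def manipulation_score(text: str) -> float:
    """Return a crude 0..1 score; higher means a more manipulation-like style."""
    words = text.lower().split()
    if not words:
        return 0.0
    emotional = sum(w.strip(".,!?") in EMOTIONAL_WORDS for w in words) / len(words)
    diversity = len(set(words)) / len(words)  # type-token ratio
    length = min(len(words) / 50, 1.0)        # longer texts score higher
    return (emotional + diversity + length) / 3

def looks_manipulative(text: str, threshold: float = 0.5) -> bool:
    """Flag a text whose style score exceeds a hand-picked threshold."""
    return manipulation_score(text) > threshold
```

<p>A real detector would learn such signals from labeled data rather than rely on a hand-written list, but the intuition is the same.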
<\/p><p>Following the old saying &#8220;Hair of the dog that bit you&#8221;, using large language models to build classifiers that can effectively identify and counter manipulations may also prove helpful.<br\/>One of the stages of work at the MI<sup>2<\/sup>.AI laboratory involved developing an algorithm that identifies texts that could potentially mislead us. Initial tests showed that about 60% of false content was detected.  <\/p><p>It&#8217;s a good start indeed, but the participants of the AI revolution still have a lot of work ahead. The future is a world where artificial intelligence is an integral companion of our everyday life. We need to develop solutions that will make it a trustworthy partner. Only through the joint efforts of scientists, engineers, policymakers and society as a whole can we create a world where AI not only makes our lives easier, but also operates in an ethical, safe manner, consistent with our values. Ultimately, it depends on the whole society whether the future will open up new opportunities for us or will be a source of challenges that we will not be able to cope with.    <\/p>","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence is getting better at understanding our intentions, and we&#8217;re increasingly willing to trust it \u2013 sometimes even more than people. But are we aware of how easily it can convince us of its &#8220;truths&#8221;? Studies show that AI not only influences our decisions but also subtly manipulates our perception of reality. How to defend ourselves against this?   
<\/p>\n","protected":false},"author":46,"featured_media":9975,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"rank_math_lock_modified_date":false,"footnotes":""},"categories":[783,673,781,674,789],"tags":[],"popular":[],"difficulty-level":[36],"ppma_author":[364,637],"class_list":["post-10892","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-industry","category-hai-magazine-4","category-hai-premium","category-issue-4","category-law-ethics","difficulty-level-easy"],"acf":[],"authors":[{"term_id":364,"user_id":46,"is_guest":0,"slug":"prof-przemyslaw-biecek","display_name":"prof. Przemys\u0142aw Biecek","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2024\/08\/prof.-Przemyslaw-Biecek.jpeg","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2024\/08\/prof.-Przemyslaw-Biecek.jpeg"},"first_name":"Przemys\u0142aw","last_name":"Biecek","user_url":"","job_title":"","description":"Profesor Uniwersytetu Warszawskiego i Politechniki Warszawskiej. Prowadzi grup\u0119 badawcz\u0105 MI2.AI i projekt BeatBit popularyzuj\u0105cy my\u015blenie oparte na danych."},{"term_id":637,"user_id":258,"is_guest":0,"slug":"wiktoria-mieleszczenko-kowszewicz","display_name":"dr Wiktoria Mieleszczenko-Kowszewicz","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/IMG_1559.jpeg","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/IMG_1559.jpeg"},"first_name":"dr Wiktoria","last_name":"Mieleszczenko-Kowszewicz","user_url":"","job_title":"","description":"Badaczka sztucznej inteligencji. Specjalizuje si\u0119 w analizie interakcji ludzi z du\u017cymi modelami j\u0119zykowymi. 
She is also an advocate of explainable artificial intelligence (XAI), promoting the development of algorithms that are not only effective but also understandable and ethical."}],"_links":{"self":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10892","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/users\/46"}],"replies":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/comments?post=10892"}],"version-history":[{"count":1,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10892\/revisions"}],"predecessor-version":[{"id":10893,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10892\/revisions\/10893"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media\/9975"}],"wp:attachment":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media?parent=10892"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/categories?post=10892"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/tags?post=10892"},{"taxonomy":"popular","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/popular?post=10892"},{"taxonomy":"difficulty-level","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/difficulty-level?post=10892"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/ppma_author?post=10892"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}