Artificial intelligence and the natural tendency to manipulate

Artificial intelligence is getting better at understanding our intentions, and we’re increasingly willing to trust it – sometimes even more than people. But are we aware of how easily it can convince us of its “truths”? Studies show that AI not only influences our decisions but also subtly manipulates our perception of reality. How to…


Apart from well-known challenges such as the risk of discrimination or the potential exposure of dangerous information, researchers from Oxford¹ have pointed out two other key threats: excessive reliance on AI models and their ability to persuade. These risks are particularly serious because experiments show that people trust the models more than their actual accuracy warrants. The matter is further complicated by the fact that models – despite their ignorance – speak like experts: they explain complex issues in detail and use sophisticated vocabulary. As a result, people trust them more than they trust human experts.

Researchers from the Massachusetts Institute of Technology (MIT) demonstrated this in an experiment in which participants recognized emotions and assessed their intensity. At one stage, they could revise their decision based on a suggestion attributed either to artificial intelligence or to another person; the researchers told them which source the tip supposedly came from. Participants changed their minds more often when they believed the suggestion came from AI – even when a human was actually behind it. Conversely, they changed their minds equally rarely when they were advised by a person and when they merely thought they were being advised by a person while the advice actually came from AI.

This article is part of the printed edition of hAI Magazine. To read it in full, purchase online access.


Professor at the University of Warsaw and the Warsaw University of Technology. He leads the MI2.AI research group and the BeatBit project, which popularizes data-driven thinking.

An artificial intelligence researcher. She specializes in analyzing how people interact with large language models. She is also an advocate of explainable artificial intelligence (XAI), promoting the development of algorithms that are not only effective but also understandable and ethical.
