{"id":18095,"date":"2026-04-02T11:43:00","date_gmt":"2026-04-02T09:43:00","guid":{"rendered":"https:\/\/haimagazine.com\/uncategorized\/the-last-choice-youll-make\/"},"modified":"2026-04-09T12:44:07","modified_gmt":"2026-04-09T10:44:07","slug":"the-last-choice-youll-make","status":"publish","type":"post","link":"https:\/\/haimagazine.com\/en\/law-and-ethics\/the-last-choice-youll-make\/","title":{"rendered":"\ud83d\udd12 The last choice you&#8217;ll make"},"content":{"rendered":"<p><strong>I.<\/strong><\/p><p>Le\u00f3 Szil\u00e1rd was walking along Southampton Row in London when the solution occurred to him. It was September 1933. The Hungarian physicist, a refugee from Germany, had just read in The Times a statement by Ernest Rutherford, who called the idea of energy from the atom &#8220;moonshine&#8221;.<\/p><p>Szil\u00e1rd stopped at a red light. And then he saw it all: a chain reaction. One neutron releases two, two release four, four\u2014eight. An explosion of energy. Or simply an explosion.<\/p><p>I bring up this story not because algorithms are like the atomic bomb. They aren\u2019t. But there&#8217;s something in it that seems important to our situation: the moment when a series of abstract possibilities turns into a trajectory. Something that can only be understood once it becomes a fact.<\/p><p>Kenneth Wenger, author of a book on the pitfalls of artificial intelligence, puts it this way: &#8220;We can accept the widespread use of AI automation in all aspects of our lives, and it will be our choice, but it will be the only choice we will ever make. If we make that decision, the choices that shape our future will no longer be ours.&#8221;<\/p><p>We have a choice, but it&#8217;s a choice that may eliminate the possibility of further choices. 
The decision to delegate decision-making may be the last decision that is truly our own.<\/p><p><strong>II.<\/strong><\/p><p>Norbert Wiener, the father of cybernetics, warned of a similar mechanism as early as 1964. In &#8220;God and Golem, Inc.&#8221; he cited W. W. Jacobs&#8217;s tale of the monkey&#8217;s paw: a magical object that grants wishes literally and, unfortunately, catastrophically.<\/p><p>\u201cThe magic of automation may prove equally literal,\u201d wrote Wiener. \u201cIf you program a machine to win according to specified rules, it will strive for victory and pay not the slightest attention to any other considerations.\u201d<\/p><p>Wiener also noted that in the military discussions of the time, ideas were emerging to use learning machines to support decisions on the use of nuclear weapons. He wrote this six decades ago. Today, debates over &#8220;human in the loop&#8221; and the requirement for &#8220;meaningful human control&#8221; have re-emerged in regulatory discussions about autonomous weapons.<\/p><p>But there&#8217;s a difference between a bomb and an algorithm. And that difference makes the matter more difficult, not easier.<\/p><p><strong>III.<\/strong><\/p><p>The atomic bomb has existed since 1945, and after Hiroshima and Nagasaki it hasn&#8217;t been used in combat again. Why? One explanation is the logic of deterrence, including the doctrine of Mutually Assured Destruction (MAD). But it&#8217;s not the only one: the history of non-use is also a history of institutions, norms and political control of escalation.<\/p><p>One can therefore argue, and it would be an optimistic argument, that humanity is capable of building mechanisms of restraint with regard to dangerous technologies.<\/p><p>Does this analogy hold for artificial intelligence? 
I have serious doubts.<\/p><p>MAD is based on conditions that are clear in the case of nuclear weapons: rational actors, credible retaliation, immediate consequences, an unambiguous threshold.<\/p><p>When delegating decisions to AI, these conditions are blurred. There&#8217;s no clear &#8220;moment of explosion&#8221;. The consequences are spread out over time, hard to attribute, often invisible. Above all, delegating decisions is a gradual, not binary, process.<\/p><p>No one will announce: &#8220;as of tomorrow, AI makes all the decisions.&#8221; It will be a series of micro-delegations, each of which will seem reasonable on its own. Delegation doesn&#8217;t mean that AI rules. It means that by default we accept its choice, and objecting requires extra effort.<\/p><p>So the question isn\u2019t whether we delegate. The question is: is delegation reversible without incurring a cost that will prove prohibitive for most people?<\/p><p>Because reversibility is not a matter of declaration (\u201cI can always turn it off\u201d). It&#8217;s a matter of the cost of switching. Delegation becomes effectively irreversible when the cost of going back rises faster than the willingness to bear it, when the institutional environment starts to treat the algorithm as the standard, and when skill atrophy means users can no longer evaluate decisions without the system. It&#8217;s not a single moment, rather a slow closing of the door.<\/p><p><strong>IV.<\/strong><\/p><p>Spotify&#8217;s algorithm suggests a playlist. Netflix recommends a show. Navigation chooses the route. The assistant orders products. Each of these micro-delegations is convenient, saves time and reduces cognitive effort.<\/p><p>The popular narrative urges us to be outraged here: &#8220;bubbles,&#8221; &#8220;echo chambers,&#8221; &#8220;preference manipulation.&#8221; But the problem may lie somewhere other than radicalization. 
Over the long term, recommendations may train something subtler: a reflex to accept the system&#8217;s default choices as one&#8217;s own. A tendency to cede choice.<\/p><p>I&#8217;m not claiming we have hard evidence for it. I&#8217;m claiming it&#8217;s a plausible mechanism. A single recommendation doesn&#8217;t change much. A thousand recommendations over five years\u2014that&#8217;s something else. It can shift the horizon of what we consider worth paying attention to and get us used to the idea that the choice &#8220;comes from outside&#8221;.<\/p><p>High-stakes delegation begins with defaults: the default message order, the default reply suggestion. Saying yes is cheap; saying no is costly.<\/p><p>Gigerenzer highlights another factor: the tendency to believe assurances that Google knows its users better than they know themselves. This belief often outpaces the technology&#8217;s real capabilities. But even if it&#8217;s exaggerated today, it builds habits that may be hard to reverse tomorrow.<\/p><p>We don\u2019t see the bars that confine us. Because this isn\u2019t yet a loss of free will\u2014it\u2019s the loss of the conditions for making decisions: attention, the moment to pause, and competence. Without them, delegating choices becomes a reflex, not a decision.<\/p><p><strong>V.<\/strong><\/p><p>You might shrug at this point: &#8220;They&#8217;re just recommendations. I can ignore them. I can turn off suggestions.&#8221;<\/p><p>In theory, you can. In practice\u2026 that&#8217;s where the trap lies.<\/p><p>Wenger invokes a historical analogy: the Agricultural Revolution. The first people who began to sow seed didn&#8217;t make a conscious decision: &#8220;I choose security at the expense of freedom.&#8221; Each step was a micro-decision: sow some seed here, stay one more season there, build a fence. 
Only the sum of these decisions created a reality from which, once the population exceeded the ecological carrying capacity of the hunter-gatherer model, it was difficult to retreat.<\/p><p>Economists call this mechanism &#8220;path dependence&#8221;, popularized, among others, by Paul David in his analysis of the QWERTY keyboard. Each step seems reasonable at the moment it&#8217;s taken. But the sequence of steps leads to a point that no one explicitly chose and that is hard to leave.<\/p><p><strong>VI.<\/strong><\/p><p>Before I go any further, it\u2019s only fair to present the other side. Steven Pinker, in Enlightenment Now, presents arguments that are hard to dismiss. Average life expectancy is rising. Extreme poverty is declining. Violence\u2014contrary to intuition fed by headlines\u2014is at historic lows.<\/p><p>Delegating decisions to automated systems has real advantages, and it&#8217;s not just about convenience. Autonomous cars, diagnostic systems, optimization algorithms\u2014the delegation has real benefits measured in lives saved and emissions reduced.<\/p><p>AI systems can also expand the range of human capabilities, offering options we wouldn&#8217;t have considered ourselves\u2014a real expansion of autonomy in terms of what&#8217;s possible.<\/p><p>So are fears about the loss of agency just another version of a moral panic? Another \u201cthis time is different\u201d that will turn out to be a false alarm?<\/p><p>It\u2019s possible. But there&#8217;s a difference between the question: &#8220;Are we living better?&#8221; and the question: &#8220;Who decides how we live?&#8221; Pinker is right about the indicators. Indicators measure outcomes, not processes. We can live longer, healthier and more comfortably, and at the same time have less and less influence over the conditions in which we live.<\/p><p>The problem is that short-term benefits don&#8217;t guarantee long-term benefits. Each individual act of delegation may be beneficial. 
The sum of these acts may lead to a situation we wouldn&#8217;t have chosen if we had been able to see it in advance.<\/p><p><strong>VII.<\/strong><\/p><p>\u201cI don\u2019t know,\u201d writes Wenger, \u201cwhether we are truly incapable of mastering the technologies we create, or whether our desire to explore every aspect of technology\u2014good and bad\u2014is simply too great for us to keep it under control.\u201d<\/p><p>I find this sentence more important than categorical warnings. Because the truth is that we don&#8217;t know where the threshold lies. We don&#8217;t know which micro-delegation will turn out to be the decisive one. We don&#8217;t even know how we&#8217;ll recognize that we&#8217;ve crossed the threshold.<\/p><p>Gigerenzer puts it differently: &#8220;Intelligence means the ability to understand the potential and dangers of digital technologies, as well as the determination to maintain control.&#8221; The word &#8220;determination&#8221; is decisive here. Maintaining control requires effort and a willingness to say &#8220;no&#8221; in situations where &#8220;yes&#8221; would be more convenient.<\/p><p>These aren&#8217;t systemic solutions. But it&#8217;s a start: restoring friction where platforms have deliberately removed it.<\/p><p><strong>VIII.<\/strong><\/p><p>I&#8217;m leaving the questions open because I don&#8217;t have an answer I&#8217;d be certain of.<\/p><p>Are we still before the threshold, and can we shape the terms of delegation, set boundaries and build mechanisms for reversibility?<\/p><p>Are we on the threshold, at the point where the decision to delegate further is still our decision, but might be the last one?<\/p><p>Could it be that the threshold is already behind us, and that what seems to us to be a debate about the future is in fact a reconstruction of history that has already happened?<\/p><p>Wenger suggests that we still have a choice. 
But he warns that this choice may be &#8220;the only choice we make&#8221;.<\/p><p>On Southampton Row, Szil\u00e1rd saw a chain reaction and spent the rest of his life trying to prevent its consequences. He lost most of the battles. But he fought. He knew what he had seen.<\/p><p>Perhaps this is all we have: not the ability to stop the wave, but the ability to see it. Consciousness as the boundary condition of agency.<\/p><p>And perhaps by this (neither by grand declarations nor by apocalyptic scenarios) we&#8217;ll know we&#8217;ve crossed to the other side: by the fact that the question &#8220;do we want to?&#8221; has been replaced by &#8220;can we still?&#8221;.<\/p><hr class=\"wp-block-separator has-alpha-channel-opacity\"\/><p><strong>Bibliography<\/strong><\/p><p>B\u00f6ttger, T., Rudert, S.C., &amp; Greving, H. (2023). The absent phone still rings: A meta-analytic examination of the &#8220;brain drain&#8221; effect of smartphones on cognitive capacity. Behavioral Sciences.<\/p><p>Gigerenzer, G. (2023). A Healthy Mind in the Network of Algorithms. Copernicus Center Press. [original: Klick: Wie wir in einer digitalen Welt die Kontrolle behalten, 2021]<\/p><p>Liu, N., Hu, X.E., Savas, Y. et al. (2025). Short-term exposure to filter-bubble recommendation systems has limited polarization effects: Naturalistic experiments on YouTube. Proceedings of the National Academy of Sciences.<\/p><p>Pinker, S. (2018). Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Zysk i S-ka. Trans. T. Biero\u0144. [orig. Enlightenment Now, 2018]<\/p><p>Ruiz Pardo, D., &amp; Minda, J.P. (2022). Replication of &#8216;Brain drain: The mere presence of one&#8217;s own smartphone reduces available cognitive capacity&#8217;. Acta Psychologica.<\/p><p>Ward, A.F., Duke, K., Gneezy, A., &amp; Bos, M.W. (2017). Brain Drain: The Mere Presence of One&#8217;s Own Smartphone Reduces Available Cognitive Capacity. 
Journal of the Association for Consumer Research.<\/p><p>Wenger, K. (2025). Is the Algorithm Plotting Against Us? What Everyone Should Know About the Concepts and Pitfalls of Artificial Intelligence. Helion. Trans. G. Werner. [orig. Is the Algorithm Plotting Against Us?: A Layperson&#8217;s Guide to the Concepts and Pitfalls of Artificial Intelligence, 2023]<\/p><p>Wiener, N. (1964). God and Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion. MIT Press.<\/p>","protected":false},"excerpt":{"rendered":"<p>There&#8217;s no single moment when we hand over control to machines. Rather, it&#8217;s a series of small decisions that, over time, create a process that&#8217;s hard to stop. And it&#8217;s in those micro-decisions that the greatest risk may lie.<\/p>\n","protected":false},"author":568,"featured_media":18014,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"rank_math_lock_modified_date":false,"footnotes":""},"categories":[832,805],"tags":[],"popular":[],"difficulty-level":[],"ppma_author":[974],"class_list":["post-18095","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-editors-picks","category-law-and-ethics"],"acf":[],"authors":[{"term_id":974,"user_id":568,"is_guest":0,"slug":"zbigniew-rzepkowski","display_name":"Zbigniew Rzepkowski","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/10\/zbigniew-rzepkowski-scaled.jpg","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/10\/zbigniew-rzepkowski-scaled.jpg"},"first_name":"","last_name":"","user_url":"","job_title":"","description":"Project Manager and AI Manager. He connects the worlds of business, technology, and the humanities. He writes about artificial intelligence from the perspective of a practitioner and an observer of change: about ethics, geopolitics, and the impact of AI on people. 
In his writing he seeks a balance between innovation and responsibility."}],"_links":{"self":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/18095","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/users\/568"}],"replies":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/comments?post=18095"}],"version-history":[{"count":1,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/18095\/revisions"}],"predecessor-version":[{"id":18096,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/18095\/revisions\/18096"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media\/18014"}],"wp:attachment":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media?parent=18095"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/categories?post=18095"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/tags?post=18095"},{"taxonomy":"popular","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/popular?post=18095"},{"taxonomy":"difficulty-level","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/difficulty-level?post=18095"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/ppma_author?post=18095"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}