{"id":10535,"date":"2025-03-31T10:00:00","date_gmt":"2025-03-31T08:00:00","guid":{"rendered":"https:\/\/haimagazine.com\/uncategorized\/ai-under-scrutiny\/"},"modified":"2025-06-26T15:39:04","modified_gmt":"2025-06-26T13:39:04","slug":"ai-under-scrutiny","status":"publish","type":"post","link":"https:\/\/haimagazine.com\/en\/hai-magazine-4\/ai-under-scrutiny\/","title":{"rendered":"AI under scrutiny"},"content":{"rendered":"<p>With each new convenience or feature, the excitement about (generative) AI is growing, as well as the desire to transfer more of our lives into the technological reality. If we can have a source of knowledge, a virtual assistant, and a set of content editing tools in one easy and accessible solution, why wouldn&#8217;t we take advantage of it? The accessibility of AI solutions makes us more willing to entrust them with more areas of our lives, to give them more information about us and our environment. And that always entails some risk.    <\/p><p>As AI enthusiasts, we gladly embrace new technology because we want to make the most of what it has to offer us. As lawyers, we keep an even closer eye, because we see the risks that it poses. <\/p><p>Check out our subjective guide, from a lawyer&#8217;s point of view, about the changes that OpenAI has made over the past few months.<\/p><h4 class=\"wp-block-heading\"><strong>Image and sound<\/strong><\/h4><p>Let&#8217;s start by the video features and advanced voice processing in Advanced Voice Mode. As a reminder, Advanced Voice Mode in ChatGPT allows us to have natural voice conversations with an AI model. The new options, in turn, allow the assistant to analyze images as well. Just point the camera at an object, and ChatGPT will describe what we see and provide us with detailed information about it.   <\/p><p>It&#8217;s also worth noting that ChatGPT is getting better at generating a more natural voice, thanks to its &#8220;ability&#8221; to recognize tone and emotions. 
As a result, the generated speech is closer to natural human speech.<\/p><p>We remember the groundbreaking change when Siri learned to send messages with the content we dictated. What if ChatGPT could remind us about scheduled meetings, send summaries of AI news, or monitor flight ticket prices? The latest feature of the ChatGPT bot is meant to bring it closer to the role of a virtual assistant. &#8220;Tasks&#8221; allow users to plan, organize and automate various activities simply and intuitively. Although merely scheduling reminders isn&#8217;t revolutionary (every smartphone clock app offers this function), the interesting part is the aspect of &#8220;taking action&#8221; at the planned time.<\/p><h4 class=\"wp-block-heading\"><strong>User safety<\/strong><\/h4><p>The changes proposed by OpenAI also go beyond ChatGPT. In December 2024, OpenAI unveiled Sora, a model that creates videos based on text or images. The model also allows users to expand existing videos with new content. From our perspective, the most interesting thing is that OpenAI has added mechanisms at the technology level that protect against generating the likeness of real people (similar solutions can be found in the context of &#8220;voice assistance&#8221;). The model has a block at the input level that automatically rejects requests to generate content with the image of real people, and the resulting output \u2013 as part of the fight against deepfakes \u2013 is deliberately reduced in quality.<\/p><p>OpenAI&#8217;s tools are generative artificial intelligence: they learn from the data they are provided and, at the same time, can create new content \u2013 words, images, videos. In the learning process, they also use the information that we introduce as input. The only limitation to what you can input is your imagination (and sometimes the tool&#8217;s functionality), though &#8220;bad&#8221; queries can be filtered and blocked. 
The latest changes in OpenAI&#8217;s tools strongly focus on using, processing, and creating videos \u2013 materials that can significantly affect privacy, including personal data and personal image. This raises a question primarily about the safety and protection of users.<\/p><p>Obviously, GenAI doesn&#8217;t operate in a legal vacuum, even though there are still no comprehensive laws dedicated specifically to this technology. However, after encountering the legal challenges associated with GenAI, many international institutions and organizations have created guidelines that limit such risks (though we prefer to call them &#8220;challenges&#8221; instead of &#8220;risks&#8221;). In any case, such guidelines are not absolutely binding (unlike the provisions of a law that &#8220;governs&#8221; us), and their application is voluntary, or &#8220;almost&#8221; voluntary if some authority stands behind them. From the legal perspective, the development of GenAI brings about three main challenges: preserving creativity, security, and personal data protection.<\/p><p>Let&#8217;s add another reservation \u2013 the legal and regulatory environment for enterprise solutions differs from the one that applies to consumers, which is something we should always keep in mind.<\/p><p>Where we use these tools and functions is also relevant for the security assessment. For instance, the level of protection in the European Union and the United States differs significantly. The fact that OpenAI doesn&#8217;t provide new features to users from the European Union is mainly a response to EU regulations. Regulations like the GDPR will be key in that respect. However, let&#8217;s not forget that many of these applications are hosted on servers outside the European Economic Area, so personal data may be transferred outside this area.     
<\/p><p>The <strong>Terms of Use<\/strong> and <strong>Privacy Policy<\/strong> of OpenAI&#8217;s tools, which were updated at the beginning of last December, are crucial for determining the level of user protection. A positive change worth noting is the improved readability of these documents.<\/p><h4 class=\"wp-block-heading\"><strong>Privacy and data<\/strong><\/h4><p>One of the biggest remaining challenges related to AI tool security is protecting user privacy and data. For example, using features like <strong>Tasks<\/strong> or <strong>Video<\/strong> in <strong>Advanced Voice Mode<\/strong> carries the risk of unintentionally sharing large amounts of personal data with OpenAI. A simple message like &#8220;Remind me about the meeting with John Smith, CEO of company XYZ, next Friday at 11:00&#8221; is enough for the system to process sensitive information, including names, positions, company names, and dates.<\/p><p>OpenAI&#8217;s Privacy Policy openly states that it collects the personal data provided by users \u2013 both the data provided when setting up an account and the data entered while using the tools, whether shared as text, images, videos, or sound. In addition, technical data such as <strong>system logs<\/strong>, <strong>location<\/strong>, and <strong>device information<\/strong> is also collected. We should also highlight a positive change \u2013 the Privacy Policy has been supplemented with specific examples that make it easier for users to understand this process.<\/p><p>What does OpenAI do with this data? In practice \u2013 a lot of things. 
It is used, among other things, in the following areas:<\/p><ul class=\"wp-block-list\"><li>Improving and developing services<\/li>\n\n<li>Preventing fraud and protecting system security<\/li>\n\n<li>Providing services to users<\/li><\/ul><p>Importantly, the data is also shared with third parties, e.g., companies providing services for OpenAI or affiliated entities.<\/p><p>As a result, after entering the data, users lose control over it, or at least their control is limited \u2013 a transparency issue that the AI Act is supposed to solve, at least to some extent.<\/p><h4 class=\"wp-block-heading\"><strong>Image protection<\/strong><\/h4><p>Another challenge associated with using AI tools, such as the Sora model, is image protection. Under European regulations, an image constitutes personal data that is subject to GDPR rules. Furthermore, in the laws of individual EU member states, we can find detailed rules on image management, treating it as a personal right. For example, in Poland, disseminating someone&#8217;s image requires voluntary, conscious, specific, and unambiguous consent. Exceptions include situations defined by law, like taking pictures of a public figure in connection with their role or capturing an image in a public space.<\/p><p>However, we should keep in mind that these exceptions don&#8217;t apply when distributing an image through generative artificial intelligence (GenAI) tools. OpenAI&#8217;s Sora Usage Guidelines clearly prohibit creating videos that depict any person&#8217;s image without their explicit consent. This applies to both public figures and private individuals, excluding deceased persons. The guidelines also forbid generating content involving individuals under 18 years of age, even if they have agreed to it.<\/p><p>This is another way OpenAI wants to limit deepfakes, which have spread online and caused a lot of reputational and legal damage. 
It&#8217;s also worth noting that while OpenAI&#8217;s general tools can be used by individuals over 13 years old (with parental consent for minors), Sora is intended only for adult users. The terms of use don&#8217;t include any exceptions to this rule.<\/p><p>We would like to emphasize that the models behind these tools will also be subject to the AI Act.<\/p>","protected":false},"excerpt":{"rendered":"<p>What threats does artificial intelligence pose? To what extent does the law respond to OpenAI&#8217;s innovations? <\/p>\n","protected":false},"author":43,"featured_media":9762,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"rank_math_lock_modified_date":false,"footnotes":""},"categories":[783,673,781,674,789],"tags":[],"popular":[],"difficulty-level":[36],"ppma_author":[370,645],"class_list":["post-10535","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-industry","category-hai-magazine-4","category-hai-premium","category-issue-4","category-law-ethics","difficulty-level-easy"],"acf":[],"authors":[{"term_id":370,"user_id":43,"is_guest":0,"slug":"dr-michal-nowakowski","display_name":"dr Micha\u0142 Nowakowski","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2024\/08\/Michal-Nowakowski.jpeg","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2024\/08\/Michal-Nowakowski.jpeg"},"first_name":"Micha\u0142","last_name":"Nowakowski","user_url":"","job_title":"","description":"Partner responsible for AI &amp; CyberSec at ZP Zackiewicz &amp; Partners, CEO at GovernedAI. 
"},{"term_id":645,"user_id":266,"is_guest":0,"slug":"martyna-rzeczkowska","display_name":"Martyna Rzeczkowska","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/M.Rzeczkowska.jpg","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/M.Rzeczkowska.jpg"},"first_name":"Martyna","last_name":"Rzeczkowska","user_url":"","job_title":"","description":"Partner odpowiedzialny za AI &amp; CyberSec w ZP Zackiewicz &amp; Partners, CEO w GovernedAI. "}],"_links":{"self":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10535","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/users\/43"}],"replies":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/comments?post=10535"}],"version-history":[{"count":2,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10535\/revisions"}],"predecessor-version":[{"id":10543,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10535\/revisions\/10543"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media\/9762"}],"wp:attachment":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media?parent=10535"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/categories?post=10535"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/tags?post=10535"},{"taxonomy":"popular","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/popular?post=10535"},{"taxonomy":"difficulty-level","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/difficulty-level?post=10535"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/w
p-json\/wp\/v2\/ppma_author?post=10535"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}