AI under scrutiny

What threats does artificial intelligence pose? To what extent does the law respond to OpenAI’s innovations?


With each new convenience or feature, the excitement about (generative) AI is growing, as well as the desire to transfer more of our lives into the technological reality. If we can have a source of knowledge, a virtual assistant, and a set of content editing tools in one easy and accessible solution, why wouldn’t we take advantage of it? The accessibility of AI solutions makes us more willing to entrust them with more areas of our lives, to give them more information about us and our environment. And that always entails some risk.

As AI enthusiasts, we gladly embrace new technology because we want to make the most of what it has to offer us. As lawyers, we keep an even closer eye on it, because we see the risks it poses.

Check out our subjective guide, from a lawyer’s point of view, about the changes that OpenAI has made over the past few months.

Image and sound

Let’s start with the video features and advanced voice processing in Advanced Voice Mode. As a reminder, Advanced Voice Mode in ChatGPT allows us to have natural voice conversations with an AI model. The new options, in turn, allow the assistant to analyze images as well. Just point the camera at an object, and ChatGPT will describe what we see and provide us with detailed information about it.

It’s also worth noting that ChatGPT is getting better at generating a more natural voice, thanks to its “ability” to recognize tone and emotions. As a result, the generated speech is now closer to natural human speech.

We remember the groundbreaking change when Siri learned to send messages with the content we dictated. What if ChatGPT could remind us about scheduled meetings, send summaries of AI news, or monitor flight ticket prices? The latest feature of the ChatGPT bot is meant to bring it closer to the role of a virtual assistant. “Tasks” allow users to plan, organize and automate various activities simply and intuitively. Although merely scheduling reminders isn’t revolutionary (every smartphone clock app offers this function), the interesting part is the aspect of “taking action” at the planned time.

User safety

The changes proposed by OpenAI also go beyond ChatGPT. In December 2024, OpenAI unveiled Sora, a model that creates videos based on text or images. The model also allows users to expand existing videos with new content. From our perspective, the most interesting thing is that OpenAI has added appropriate mechanisms at the technology level that protect against generating the likeness of real people (similar solutions can be found in the context of “voice assistance”). The model has a block at the input level that automatically rejects requests to generate content with the image of real people, and the resulting output – as part of the fight against deepfakes – has deliberately reduced quality.

OpenAI tools are generative artificial intelligence that learns based on the provided data and, at the same time, can create new content – words, images, videos. In the learning process, it also uses the information that we introduce as input. The only limitation to what you can input is your imagination (and sometimes functionality), though “bad” queries can be filtered and blocked. The latest changes in OpenAI tools strongly focus on using, processing, and creating videos, which are materials that can have a significant impact on privacy issues, including personal data and image. This raises a question primarily about the safety and protection of users.

Obviously, GenAI doesn’t operate in a legal vacuum. At the moment, however, there are still few binding laws specifically regulating the use of this technology. After encountering the legal challenges associated with GenAI, many international institutions and organizations have created guidelines that limit such risks (though we prefer to call them “challenges” instead of “risks”). In any case, such guidelines are not absolutely binding (unlike the provisions of a law that “governs” us), and their application is voluntary, or “almost” voluntary if some authority stands behind them. From the legal perspective, the development of GenAI brings three main challenges: preserving creativity, security, and personal data protection.

Let’s add another reservation – the legal and regulatory environment for enterprise solutions is different from the one that applies to consumers, which is something we should always keep in mind.

Where we use these tools and functions is also relevant for the security assessment. For instance, the protection level in the European Union and the United States is very different. The fact that OpenAI doesn’t immediately provide new features to users from the European Union is mainly a response to EU regulations; regulations like the GDPR will be key in that respect. However, let’s not forget that many of these services are hosted on servers outside the European Economic Area, so personal data may be transferred outside this area.

The Terms of Use and Privacy Policy of OpenAI tools, which were updated at the beginning of last December, are crucial for establishing the user protection level. A positive change worth noting is the improvement in the readability of these documents.

Privacy and data

One of the biggest remaining challenges related to AI tool security is protecting user privacy and data. For example, using features like Tasks or Video in Advanced Voice Mode carries the risk of unintentionally sharing large amounts of personal data with OpenAI. Just a simple message like “Remind me about the meeting with John Smith, CEO of company XYZ, next Friday at 11:00” is enough for the system to process sensitive information, including names, positions, company names and dates.
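One practical way to limit this risk is to strip obvious personal identifiers from a prompt before it ever leaves the device. The sketch below is purely illustrative and is not part of OpenAI’s tooling: the `redact_prompt` helper is hypothetical, and real-world redaction would need far more robust entity detection than a name list and a time pattern.

```python
import re

def redact_prompt(prompt: str, known_names: list[str]) -> str:
    """Replace known personal names and simple time patterns with
    placeholders before the prompt is sent to an external service."""
    redacted = prompt
    # Replace each name we know about with a neutral placeholder.
    for name in known_names:
        redacted = redacted.replace(name, "[NAME]")
    # Crude pattern for clock times such as "11:00".
    redacted = re.sub(r"\b\d{1,2}:\d{2}\b", "[TIME]", redacted)
    return redacted

print(redact_prompt(
    "Remind me about the meeting with John Smith, CEO of company XYZ, "
    "next Friday at 11:00",
    ["John Smith", "XYZ"],
))
```

Even a minimal filter like this shows the trade-off: the reminder still works, but the service no longer learns who the meeting is with or which company is involved.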

OpenAI’s Privacy Policy openly states that it collects the personal data provided by users – both the data provided when setting up an account and the data entered while using the tools, whether shared as text, images, videos or sound. In addition, technical data such as system logs, location and device information is also collected. We should also highlight a positive change – the Privacy Policy has been supplemented with specific examples that make it easier for users to understand this process.

What does OpenAI do with this data? In practice – a lot of things. It is used, among others, in the following areas:

  • Improving and developing services
  • Preventing fraud and protecting system security
  • Providing services to users

Importantly, this data is also shared with third parties, e.g., companies providing services for OpenAI or affiliated entities.

As a result, after entering the data, users lose control over it, or at least the control is limited – this is a transparency issue that the AI Act is supposed to solve, at least to some extent.

Image protection

Another challenge associated with using AI tools, such as the Sora model, is image protection. Under European regulations, an image constitutes personal data subject to the GDPR. Furthermore, in the regulations of individual EU member states, we can find detailed provisions on image management, treating it as a personal right. For example, in Poland, spreading someone’s image requires voluntary, conscious, specific and unambiguous consent. Exceptions include legally defined situations, like taking pictures of a public figure in connection with their role or capturing an image in a public space.

However, we should keep in mind that these exceptions don’t apply when distributing an image through generative AI (GenAI) tools. OpenAI’s Sora Usage Guidelines clearly prohibit creating videos that depict any person’s image without their explicit consent. This applies to both public figures and private individuals, with the exception of deceased persons. The guidelines also forbid generating content involving individuals under 18 years of age, even if they have agreed to it.

This is another way OpenAI wants to limit deepfakes that have spread online and caused a lot of reputational and legal damage. It’s also worth noting that while OpenAI’s general tools can be used by individuals over 13 years old (with parental consent for minors), Sora is intended only for adult users. The terms of use don’t include any exceptions for this rule.

We would like to emphasize that the models behind these tools will also be subject to the AI Act.

Partner responsible for AI & CyberSec at ZP Zackiewicz & Partners, CEO at GovernedAI.
