🔒 Understanding AI: a safety game


Inez Okulska: Safety in the context of artificial intelligence is discussed in every way possible. Both big players and newcomer enthusiasts increasingly feel that these two concepts, though not always easy to reconcile (see: the massive layoffs in big tech in precisely this area), must go hand in hand. Is risk just a flaw of bad models, or an intrinsic aspect of this technology? What does “AI safety” actually mean for business and everyday life?

Przemysław Biecek: Paraphrasing the Anna Karenina principle: all good models are alike; each bad model is bad in its own way. This saying really holds up when analyzing the safety of artificial intelligence models.
Unbiased, secure, trusted, robust, transparent, verified – these are just some of the attributes we expect from safe AI. We demand a whole bundle of desired features from a safe model, and failing even one of them means we consider the model defective, and sometimes even dangerous. The word “safe” here is an umbrella for many criteria we want the model to meet.

This article is part of the printed edition of hAI Magazine. To read it in full, purchase online access.


Editor-in-chief of hAI Magazine, researcher and co-author of AI models (StyloMetrix, PLLuM), lecturer, Top100 Women in AI in Poland

Professor at the University of Warsaw and the Warsaw University of Technology. He leads the MI2.AI research group and the BeatBit project, which popularizes data-driven thinking.
