Inez Okulska: Safety in the context of artificial intelligence is discussed from every possible angle. Both big players and newcomer enthusiasts increasingly sense that these two concepts, though not always easy to reconcile (see: the massive layoffs in big tech in precisely this area), must go hand in hand. Is risk merely a flaw of bad models, or an intrinsic aspect of this technology? What does “AI safety” actually mean for business and for everyday life?
Przemysław Biecek: Paraphrasing the Anna Karenina principle: all good models are alike; each bad model is bad in its own way. This maxim holds up remarkably well when analyzing the safety of artificial intelligence models.
Unbiased, secure, trusted, robust, transparent, verified – these are just some of the attributes we expect of safe AI. A safe model must satisfy a whole set of desired properties, and failing even one of them means we consider the model defective, sometimes even dangerous. The word “safe” here is an umbrella term for the many criteria we want the model to meet.