{"id":10877,"date":"2025-03-31T10:00:00","date_gmt":"2025-03-31T08:00:00","guid":{"rendered":"https:\/\/haimagazine.com\/uncategorized\/lucky-skywalker-safe-ai-systems-in-space\/"},"modified":"2025-06-26T15:37:51","modified_gmt":"2025-06-26T13:37:51","slug":"lucky-skywalker-safe-ai-systems-in-space","status":"publish","type":"post","link":"https:\/\/haimagazine.com\/en\/hai-magazine-4\/lucky-skywalker-safe-ai-systems-in-space\/","title":{"rendered":"\ud83d\udd12 Lucky Skywalker. Safe AI systems in space"},"content":{"rendered":"<p>Imagine living in the future where technology has advanced even further and space travel is not just a dream, but an everyday reality. On every spaceship, aside from the captain and pilots, there are scientists, mechanics and onboard system specialists. At the end of the day, someone has to keep an eye on things to make sure everything works smoothly: powerful engines, atmosphere-creating tools, and the widely used artificial intelligence that helps determine the correct flight trajectory of the spaceship, write an email or recognize the language spoken by newly encountered aliens.  <\/p><p>There are few things that haven&#8217;t changed over the past decades, but one thing certainly hasn&#8217;t \u2013 people still don&#8217;t fully understand how artificial intelligence thinks and where the decisions made by the model come from. However, everyone already realizes how crucial it is to keep AI systems safe. In the past, there were often dangerous situations where models were attacked by malicious agents. These days, every spaceship must also have an AI safety engineer on board. No one really wants the alarm to go off again and all the models to start reporting anomalies in every possible system as if the ship were about to fall apart. There were situations where no one could communicate with chatbots managing the entire deck \u2013 they could only report a malfunction but didn&#8217;t explain what the exact problem was or what caused it. 
<\/p><p>Returning to the present \u2013 to prevent such situations, not just in the space sector, we can take action now; we don&#8217;t have to wait for the first manned flight to Mars. We&#8217;re already working actively on the PINEBERRY project, which aims to raise awareness about the safety and explainability of AI models used in the space sector. The project is being carried out by researchers from MI2.AI at the Warsaw University of Technology and KP Labs on behalf of the European Space Agency (ESA).  <\/p><h4 class=\"wp-block-heading\"><strong>Potential risks<\/strong><\/h4><p>If you want to create a well-functioning AI system, it\u2019s better not to hide the implementation details hoping that this measure will do the trick (<em>security through obscurity<\/em>). Instead, you should invest in real security measures.<\/p><p>Right at the start of every project, you should ask yourself how to protect the system against potential <em>malicious agents<\/em>. It&#8217;s also worth understanding what the model does and why it gives you one response over another in a particular situation. We&#8217;ve included tips on how to answer the above questions in the catalogs created as part of the PINEBERRY project.  <\/p><p>In these documents, we explore the potential risks associated with using AI models in the space sector and also provide related examples. Malicious agents often only need access to the data that the model operates on to carry out an attack and influence the results it returns. There are many potential risks, such as data poisoning, data leakage, supply chain attacks or overreliance.  <\/p><h4 class=\"wp-block-heading\"><strong>Data poisoning<\/strong><\/h4><p>Now imagine you&#8217;re flying in a spaceship and you ask an AI model for a weather forecast \u2013 you want to know if there will be any geomagnetic storms on your path any time soon. 
If there are, obviously you prefer to avoid them because they can negatively affect the ship&#8217;s electronic systems. But what if someone poisoned the training data for the model, causing it to malfunction, so that you fly right into the storm unprepared? Exactly \u2013 better not to take the risk. It&#8217;s better to protect yourself and thoroughly check the training data, for example by validating it on a reference set or using anomaly detection methods. This way, you can check whether the data you received includes any suspicious observations. Additionally, you can encrypt and secure your data storage \u2013 for example by simply limiting access \u2013 reducing the chance that the data gets corrupted.       <\/p><figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"579\" height=\"405\" src=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/Zrzut-ekranu-2025-03-28-111442.png\" alt=\"\" class=\"wp-image-9718\" style=\"width:673px;height:auto\" srcset=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/Zrzut-ekranu-2025-03-28-111442.png 579w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/Zrzut-ekranu-2025-03-28-111442-300x210.png 300w\" sizes=\"auto, (max-width: 579px) 100vw, 579px\" \/><figcaption class=\"wp-element-caption\">Source: https:\/\/zenodo.org\/records\/14762574 \/ https:\/\/zenodo.org\/records\/14699440<\/figcaption><\/figure><h4 class=\"wp-block-heading\"><strong>Data leakage<\/strong><\/h4><p>Every company has secrets that should not see the light of day, especially without the knowledge and consent of those responsible for them. This applies to large corporations, hospitals and space agencies alike. Data from missions often isn&#8217;t shared for many years, partly because competitors or hackers might use it. 
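<\/p><p>Returning for a moment to the data poisoning section above \u2013 the screening idea (validating a newly received batch against a trusted reference set with an anomaly detector) could be sketched roughly as follows. This is only an illustration: scikit-learn is assumed to be available, and the data, the four-feature layout and the contamination rate are all made up.<\/p>

```python
# A rough sketch of screening a new training batch for poisoned samples.
# Assumption: scikit-learn is installed; all data below is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# A trusted reference set, e.g. archived measurements you have already vetted.
reference = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

# A newly received batch: mostly normal readings, plus a few implausible
# observations standing in for poisoned data.
batch = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(95, 4)),
    rng.normal(loc=8.0, scale=0.5, size=(5, 4)),  # planted outliers
])

# Fit the detector on the trusted reference only, then flag anything in the
# new batch that does not resemble it (predict returns -1 for anomalies).
detector = IsolationForest(contamination=0.05, random_state=0).fit(reference)
labels = detector.predict(batch)
suspicious = np.flatnonzero(labels == -1)

print(f'{len(suspicious)} of {len(batch)} new observations look suspicious')
```

<p>Flagged observations are best treated as candidates for manual review rather than silently dropped \u2013 an anomaly detector can only say that a reading is unusual, not why.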
<\/p><p>As you can imagine, the agency does collect data to use it, for example, to train models that will assist during the next mission, whether by performing tedious tasks or simply making the job easier for flight controllers. When an agency announces that its new solution features incredibly effective models and allows external users to query them, hackers will try to cause a data leak, using every resource available to them. From there, it\u2019s a relatively straightforward path. With just a bit of luck, it&#8217;s possible to figure out by trial and error which predictions the model is more certain of (such data is more likely to be in the training set), or to construct a prompt for an LLM in such a way that it provides answers using secret data.   <\/p><p>A leak of patient data from a hospital would be an equally dangerous situation. No one wants their research results to end up on the internet, after all. Can this be prevented? There are many ways, including countering model overfitting, using various data anonymization techniques and applying transfer learning. These approaches can reduce the amount of sensitive information available to the model that might be stolen.    <\/p><h4 class=\"wp-block-heading\"><strong>Supply chain attack<\/strong><\/h4><p>We often think that only data or models are at risk of attack. However, these are really just pieces of a bigger picture, which also includes the computer network, software and data storage systems.<br\/>If there&#8217;s an issue with any of them, the rest will be at risk too. As a developer, you&#8217;ve surely found yourself downloading pre-trained models from the internet, fine-tuning them on internal data, and enjoying quick success. How many times have you thought about checking whether the downloaded model comes from a trusted source? 
And whether it contains additional harmful files that will install themselves on the computer, find the secret design of the latest rocket or unpublished research results on a breakthrough vaccine, and send that information out into the world? Checking the authenticity of the libraries and models you use doesn&#8217;t take much time and can save the entire project \u2013 not just in the space domain. Additionally, to enhance security, you can download data, models and libraries only from trusted sources, keep them updated regularly, and use secure formats for storing models.       <\/p><h4 class=\"wp-block-heading\"><strong>Overreliance<\/strong><\/h4><p>Sure, attacks are attacks, but after all, AI \u2013 especially LLMs \u2013 always say things that sound very smart.<br\/>However, this doesn&#8217;t mean they are always wise. Blind faith in their infallibility can lead to consequences just as destructive as the aforementioned attacks. Imagine that you have to plan a space mission to Mars \u2013 a tough and tedious task, especially when it involves another round of revisions. You ask for help from the new smart assistant with a built-in LLM, which has reportedly been performing flawlessly so far. After just a few seconds, you have a ready plan with all the revisions included.     <\/p><p>But before you send the file with the plan to your manager, you remember that boring AI safety training, so you review the file just for peace of mind. And thank goodness, because the assistant isn&#8217;t as perfect as it seemed, and a tomato soup recipe shouldn&#8217;t have ended up in the middle of the instructions for maneuvering towards the Red Planet. So you send an email not to your manager, but to the people responsible for the new AI assistant. After all, they should fix it so that it always reminds you that you are the person responsible for the proposals generated by the model. 
And they should provide training sessions showing employees how to use AI safely.    <\/p><figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"702\" height=\"234\" src=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/Zrzut-ekranu-2025-03-28-111154.png\" alt=\"\" class=\"wp-image-9725\" srcset=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/Zrzut-ekranu-2025-03-28-111154.png 702w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/Zrzut-ekranu-2025-03-28-111154-300x100.png 300w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/Zrzut-ekranu-2025-03-28-111154-600x200.png 600w\" sizes=\"auto, (max-width: 702px) 100vw, 702px\" \/><\/figure><h4 class=\"wp-block-heading\"><strong>So what now? Ready to fly? <\/strong><\/h4><p>In today&#8217;s world, it&#8217;s not enough to just build a well-functioning model; you also need to consider many other factors. One of them is the safety of individual models, as well as of entire AI-based systems. It&#8217;s important to realize that there are many more risks like the ones described here. The key to avoiding these kinds of issues is to actively counter potential security vulnerabilities in AI-supported systems.   <\/p><p>After all, nobody wants the model at the heart of the rocket management system to change its mind during flight and cause a dangerous situation. Projects like PINEBERRY serve an educational purpose and demonstrate how to use AI-based systems safely, including by creating catalogs and organizing hackathons. <\/p><p>See you in (a safe) space!<\/p>","protected":false},"excerpt":{"rendered":"<p>Explainability and safety of artificial intelligence are the two main areas of the Polish PINEBERRY project, carried out for the European Space Agency. These are also two points worth checking off in your notebook when you&#8217;re heading to space. 
<\/p>\n","protected":false},"author":46,"featured_media":9723,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"rank_math_lock_modified_date":false,"footnotes":""},"categories":[783,673,781,674,784],"tags":[],"popular":[],"difficulty-level":[36],"ppma_author":[364,642],"class_list":["post-10877","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-industry","category-hai-magazine-4","category-hai-premium","category-issue-4","category-security","difficulty-level-easy"],"acf":[],"authors":[{"term_id":364,"user_id":46,"is_guest":0,"slug":"prof-przemyslaw-biecek","display_name":"prof. Przemys\u0142aw Biecek","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2024\/08\/prof.-Przemyslaw-Biecek.jpeg","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2024\/08\/prof.-Przemyslaw-Biecek.jpeg"},"first_name":"Przemys\u0142aw","last_name":"Biecek","user_url":"","job_title":"","description":"Professor at the University of Warsaw and the Warsaw University of Technology. He leads the MI2.AI research group and the BeatBit project, which popularizes data-driven thinking."},{"term_id":642,"user_id":263,"is_guest":0,"slug":"agata-kaczmarek","display_name":"Agata Kaczmarek","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/agata_kaczmarek-scaled.jpg","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/03\/agata_kaczmarek-scaled.jpg"},"first_name":"Agata","last_name":"Kaczmarek","user_url":"","job_title":"","description":"Research Software Engineer in the PINEBERRY project carried out by the MI2.AI research group. 
A data science graduate of the Warsaw University of Technology."}],"_links":{"self":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10877","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/users\/46"}],"replies":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/comments?post=10877"}],"version-history":[{"count":1,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10877\/revisions"}],"predecessor-version":[{"id":10878,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/10877\/revisions\/10878"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media\/9723"}],"wp:attachment":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media?parent=10877"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/categories?post=10877"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/tags?post=10877"},{"taxonomy":"popular","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/popular?post=10877"},{"taxonomy":"difficulty-level","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/difficulty-level?post=10877"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/ppma_author?post=10877"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}