{"id":16156,"date":"2025-11-04T15:50:18","date_gmt":"2025-11-04T14:50:18","guid":{"rendered":"https:\/\/haimagazine.com\/uncategorized\/ai-in-medicine-global-models-local-challenges\/"},"modified":"2025-11-17T14:20:18","modified_gmt":"2025-11-17T13:20:18","slug":"ai-in-medicine-global-models-local-challenges","status":"publish","type":"post","link":"https:\/\/haimagazine.com\/en\/health-and-medicine\/ai-in-medicine-global-models-local-challenges\/","title":{"rendered":"\ud83d\udd12 AI in medicine \u2013 global models, local challenges"},"content":{"rendered":"<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\"><div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\"><figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"704\" height=\"642\" src=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/11\/Michal_Maciejewski.jpeg\" alt=\"\" class=\"wp-image-15930\" style=\"width:314px;height:auto\" srcset=\"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/11\/Michal_Maciejewski.jpeg 704w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/11\/Michal_Maciejewski-300x274.jpeg 300w, https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/11\/Michal_Maciejewski-600x547.jpeg 600w\" sizes=\"auto, (max-width: 704px) 100vw, 704px\" \/><\/figure><\/div>\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\"><p>AI can analyze data faster than any medical team. But does it really understand the patient well enough to fully trust its advice? How do we make algorithms work effectively while also considering patients&#8217; needs, data protection and legal issues? Dr. 
Micha\u0142 Maciejewski, Data &amp; Analytics Expert at Roche Polska, talks about how to shape responsible AI use in clinical practice.<\/p><\/div><\/div><p><strong>The development of artificial intelligence is moving at an unprecedented pace. How does this change the way we should look at innovations in medicine today?<\/strong><\/p><p>The development of artificial intelligence is making a real impact on daily clinical practice. On one hand, this technology paves the way for faster diagnostics, more precise therapies and smoother communication. At the same time, it presents us with a critical problem: how do we ensure that these advanced tools, largely developed globally, are safe, ethical and genuinely effective within the specifics of local healthcare systems?<\/p><p>This issue is two-pronged: it involves both defining the scope of a solution and the need to adapt it to local populations. By organizing the first Healthcare Datathon in Poland in collaboration with Roche Polska, Gda\u0144sk University Clinical Center, Gda\u0144sk Medical University and MIT Critical Data, we were driven by the belief that the true measure of any innovation&#8217;s value in medicine is its patient-centricity. The key to responsible AI implementation lies in a deep understanding of its capabilities and limitations. Additionally, AI implementations should operate in the background, supporting medical professionals in the diagnosis and treatment process, rather than adding an extra duty to their daily work.<\/p><p><strong>From a patient&#8217;s perspective \u2013 which areas of healthcare could benefit the most from implementing AI?<\/strong><\/p><p>As a data engineer, I&#8217;d like to highlight the benefits of storing and processing medical data, starting with basic statistical analyses. UCK Gda\u0144sk is a pioneer in this area, using medical data to monitor patient well-being and predict imminent health deterioration. 
During the COVID-19 pandemic, Massachusetts General Hospital used a relatively simple rule-based model that could predict which symptoms were linked to a higher risk of severe disease progression. Despite the current hype around GenAI, artificial intelligence offers a broad spectrum of possibilities \u2014 these examples from Gda\u0144sk and Boston show that even simple models provide useful heuristics for doctors.<\/p><p>For patients, the use of artificial intelligence in healthcare can translate into tangible benefits: shorter wait times for diagnoses, better-tailored treatments and easier access to doctors. A global issue is the insufficient number of medical professionals relative to current needs, making every improvement through technology incredibly valuable. Importantly, studies show that patients often feel more comfortable interacting with digital tools than in face-to-face conversations, which encourages them to share information about their health status more freely. As highlighted in the article &#8220;Patients\u2019 perspectives on digital health tools&#8221; (2023), apps and digital assistants enhance the sense of control, support self-managed care and make it easier to monitor symptoms. Speakers cited a study on patient preferences indicating that patients favored LLM responses over doctors&#8217; answers, perceiving them as more empathetic and understanding.<\/p><p>On the flip side, language models are known for generating convincing hallucinations and struggling to admit when they lack knowledge in a specific area. In a panel discussion, one doctor pointed out the growing concern and expectations among patients who self-diagnose using language models before their first oncology visit. Instead of focusing on explaining the treatment process, the doctor has to debunk misinformation provided by LLMs. 
From the physicians&#8217; perspective, it&#8217;s important to note recent research published in The Lancet involving Polish doctors, which showed that the detection skills of doctors who rely on AI for cancer detection deteriorate over time. This finding fits into the broader context of AI&#8217;s impact on education, which was also discussed during the conference.<\/p><p><strong>There is more and more talk about the importance of local clinical data. Why might it be key to the success of AI tools in Poland?<\/strong><\/p><p>Global artificial intelligence models are impressive, but they often rely on data that doesn&#8217;t quite match the local characteristics of individual countries&#8217; populations. That&#8217;s why projects based on local patient data are so crucial. In Gda\u0144sk, the University Clinical Center (UCK) and the Medical University of Gda\u0144sk are developing the Interdisciplinary Pomeranian Center for Digital Medicine. This center integrates clinical, imaging and biological data to better tailor algorithms to everyday medical practice in Poland. Using local data sets aligns with the goals of the European Health Data Space (EHDS), as its decentralized architecture is built on national systems that aggregate data from local sources, enabling their safe cross-border use for treatment, research and innovation.<\/p><p>On the second day, we tackled our first major challenge: analyzing unplanned readmissions at UCK. We used a local, anonymized clinical dataset for the first time in Poland. The insights were quite specific \u2014 for instance, readmission peaks were identified in age groups around 30 and 65 years old. Moreover, our analyses revealed a connection between certain diseases and the likelihood of being readmitted. These findings will help craft practical recommendations for the hospital. Importantly, we noticed that more diverse teams, consisting of doctors, engineers and students, conducted deeper and more insightful analyses. 
We believe our experiences in multidimensional data anonymization and in using a secure computational environment will set an example for similar initiatives in other medical institutions in the future.<\/p><p><strong>Language models are now passing medical specialty exams. How do you see their potential and limitations in clinical practice?<\/strong><\/p><p>Today we&#8217;re seeing the most spectacular development in the area of large language models (LLMs). The potential is undeniable \u2014 a Polish study from 2024 showed that GPT-4 passed as many as 222 out of 297 specialty exams. At the same time, we need to be aware of the limitations, and it\u2019s crucial to see how these models perform in real clinical settings. A recent publication by MIT researchers revealed that messages containing typos and flowery descriptions resulted in lower-quality responses.<\/p><p>As part of the second challenge of the LLM-a-thon, we tackled evaluating language models in the context of multiple sclerosis. In collaboration with neurologists from the Adult Neurology Clinic at UCK, we created a benchmark of questions and answers about this disease. The study aimed to address real issues that LLMs are already creating: providing inaccurate medical information, increasing anxiety among patients who self-diagnose with LLMs, and adding extra burden on doctors.<\/p><p>Teams consisting of doctors, patients and experts have shown that even minor changes in the prompt can lead to incorrect responses from the models. This is a clear signal that despite high pass rates on tests, their use in clinical practice requires education on permitted uses and potential side effects. 
Understanding when and why models err and how to use them properly is crucial, because they can hallucinate convincingly, as well as reinforce users&#8217; beliefs by agreeing even with their incorrect statements (sycophancy).<\/p><p><strong>Research into new drugs is expensive and carries a high risk of failure. How can artificial intelligence really help scientists and speed up breakthroughs in this field?<\/strong><\/p><p>Another crucial aspect is harnessing AI to enhance drug development. This includes initiatives like Lab in the Loop that integrate laboratory and clinical experiments with cutting-edge artificial intelligence achievements. A standout example is the AlphaFold model, which predicts protein structures with incredible precision and earned last year&#8217;s Nobel Prize in Chemistry. Using AI combined with carefully selected medical data speeds up and improves the identification of therapeutic targets, the design of new molecules and the prediction of patient responses to treatments. There&#8217;s also huge potential in the data area \u2014 historically, men from Western countries have made up 70% of clinical study populations. More balanced populations, discoveries in biomarkers (including digital ones collected via mobile devices) and diagnostic methods are key to more balanced data sets and, consequently, AI models. Another dimension of Lab in the Loop is the automation of R&amp;D work in laboratories. At Roche, a company focused on drug development, this methodology is applied in areas like autoimmune, neurodegenerative and infectious diseases, as well as oncology, boosting the chances of success in drug discovery and development.<\/p><p><strong>The rapid development of technology also demands regulation. How do you think the AI Act will affect the use of artificial intelligence in healthcare?<\/strong><\/p><p>It&#8217;s impossible to overlook the ethical and legal aspects of AI in medicine. 
According to the AI Act, AI systems that impact health, safety or fundamental rights can be considered high-risk. This means such solutions have to meet strict requirements covering quality, safety, transparency and human oversight. The question of accountability becomes very real: who is responsible when an algorithm makes a mistake? The AI Act outlines responsibilities for both providers and deployers, including keeping records and monitoring performance after deployment. Clear regulations on responsibility and oversight are another cornerstone of building patient trust in technology.<\/p><p><strong>The recently concluded first Healthcare Datathon brought together doctors, patients, officials, engineers and scientists to tackle common challenges. What do such experiences reveal about the potential of interdisciplinary collaboration?<\/strong><\/p><p>In the age of AI, critical thinking is one of the key skills that helps separate the wheat from the chaff. Collaboration across disciplinary boundaries holds great power and is crucial for developing critical thinking skills. When doctors, engineers, students and patients come together at the same table, the solutions that emerge genuinely meet the needs of the healthcare system. It&#8217;s also worth noting that the conference took place at the UCK hospital, right across from the operating block in Professor Kieturakis&#8217; hall, which to me is the Polish Princeton-Plainsboro.<\/p><p>The event was international, and the participants represented leading global (MIT, Harvard University, Stanford Healthcare, Google Research, Emory University, University College Dublin) and national centers (UCK, GUMed, WUM, Gda\u0144sk University of Technology, Pozna\u0144 University of Technology, Warsaw University of Technology, \u0141\u00f3d\u017a University of Technology). The datathon proved that interdisciplinary teams not only inspire but are essential for tackling ethical and technical issues in AI. 
This experience confirms that partnerships between Roche and universities, hospitals and public institutions create an ecosystem where innovations move beyond being just concepts and become tangible tools that support doctors in their daily work and enhance patient experiences.<\/p><p><strong>To wrap up, what role can we expect Poland to play in the global AI ecosystem in medicine in the coming years?<\/strong><\/p><p>AI holds huge potential for democratizing access to healthcare. Our vision is a responsible transformation of the system in which the human aspect always remains central. It&#8217;s crucial to understand that AI won&#8217;t replace doctors; it should strengthen the patient-doctor relationship rather than disrupt it. It can enhance diagnostics and speed up therapeutic processes, making care more personalized and available to everyone.<\/p><p>Poland has real potential to set the standards for ethical and responsible use of AI in medicine and to serve as a role model for other countries, proving that advanced models can be implemented safely and effectively, starting with thorough local verification. As Dr. Leo Anthony Celi, a global organizer of datathons, puts it: &#8220;Engaging diverse communities in AI development is the best defense we have against bias and prejudice in medicine.&#8221;<\/p>","protected":false},"excerpt":{"rendered":"<p>AI development is changing the way we diagnose, treat and conduct clinical research. 
How should we implement these solutions while prioritizing patients?<\/p>\n","protected":false},"author":354,"featured_media":15937,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"rank_math_lock_modified_date":false,"footnotes":""},"categories":[999],"tags":[],"popular":[],"difficulty-level":[38],"ppma_author":[776],"class_list":["post-16156","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-health-and-medicine","difficulty-level-medium"],"acf":[],"authors":[{"term_id":776,"user_id":354,"is_guest":0,"slug":"redakcja","display_name":"Redakcja","avatar_url":{"url":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/07\/Zrzut-ekranu-2025-07-10-o-16.00.36.png","url2x":"https:\/\/haimagazine.com\/wp-content\/uploads\/2025\/07\/Zrzut-ekranu-2025-07-10-o-16.00.36.png"},"first_name":"","last_name":"","user_url":"","job_title":"","description":""}],"_links":{"self":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/16156","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/users\/354"}],"replies":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/comments?post=16156"}],"version-history":[{"count":1,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/16156\/revisions"}],"predecessor-version":[{"id":16157,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/posts\/16156\/revisions\/16157"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media\/15937"}],"wp:attachment":[{"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/media?parent=16156"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/haimagazine.com\
/en\/wp-json\/wp\/v2\/categories?post=16156"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/tags?post=16156"},{"taxonomy":"popular","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/popular?post=16156"},{"taxonomy":"difficulty-level","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/difficulty-level?post=16156"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/haimagazine.com\/en\/wp-json\/wp\/v2\/ppma_author?post=16156"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}