Patient autonomy in the context of AI personalization technologies: new approaches and ethical challenges
DOI: https://doi.org/10.21146/2413-9084-2025-30-1-21-33

Keywords: artificial intelligence, autonomy, large language models, predictor, attitudes, beliefs, subjectivity

Abstract
This article explores the ethical challenges of using large language models in medical decision-making for incapacitated patients. It highlights how advances in artificial intelligence (AI) raise questions about the application of the principle of patient autonomy in healthcare. AI systems that analyze behavioral patterns in big data and simulate personal traits can be used to support decision-making. However, these models cannot fully capture an individual's unique values and personality, raising concerns about whether the principle of autonomy is truly upheld. One solution could be the development of a personalized patient preference predictor (P4): a large language model trained on personal data to reflect the individual's preferences. Yet the creation of a P4 requires careful consideration of the data used for training. A key challenge is the potential inconsistency of an individual's beliefs and attitudes across contexts such as written communications, social media, and verbal statements. Social psychology research suggests that psychometric data could be used to model attitudes more faithfully when training such AI. This approach could pave the way for AI-enhanced autonomy and spark discussion about the role of personality in autonomous decision-making.