Patient autonomy in the context of AI personalization technologies: new approaches and ethical challenges

Authors

  • Sofya V. Lavrentyeva, RAS Institute of Philosophy

DOI:

https://doi.org/10.21146/2413-9084-2025-30-1-21-33

Keywords:

artificial intelligence, autonomy, large language models, predictor, attitudes, beliefs, subjectivity

Abstract

This article explores the ethical challenges of using large language models in medical decision-making for incapacitated patients. It highlights how advances in artificial intelligence (AI) raise questions about the application of the principle of patient autonomy in healthcare. AI systems that analyze behavioral patterns in big data and simulate personal traits can be used to support decision-making. However, these models cannot fully capture an individual's unique values and personality, raising concerns about whether the principle of autonomy is truly upheld. One solution could be the development of a personalized patient preference predictor (P4): a large language model trained on personal data to reflect the individual's preferences. Yet creating a P4 requires careful consideration of the data used for its training. A key challenge is the potential inconsistency of an individual's beliefs and attitudes across different contexts, such as written communications, social media, and verbal statements. Research in social psychology suggests that psychometric data could be used to model attitudes more faithfully when training such AI. This approach could pave the way for AI-enhanced autonomy and spark discussion of the role of personality in autonomous decision-making.

Published

2025-06-30

Issue

Vol. 30, No. 1 (2025)

Section

Human sciences