The paradox of trust in artificial intelligence and its rationale
DOI: https://doi.org/10.21146/2413-9084-2023-28-1-34-47

Keywords: artificial intelligence, proxy culture, trust, algorithmic responsibility, AI opacity

Abstract
The article analyzes the phenomenon of trust in digital intelligent systems, which are actively transforming social practices and decision-making processes and becoming "new" mediators of our existence. As AI technologies encounter failures, the central challenge is to analyze the paradox of trust in AI, which encompasses both trust in online systems, servers, and software and the failures that artificial intelligence demonstrates. The purpose of the article is to examine the paradox of trust in AI systems by weighing the arguments pro and contra. This involves, first, identifying the essence and specifics of proxy culture as a culture of trust and, second, considering AI opacity and algorithmic responsibility. The theoretical basis comprises recent work by Russian and foreign researchers. The methodological strategy is a comparative analysis of the socio-humanitarian understanding of trust and the specifics of trust in AI. The conclusions rest on the following claims. The paradox of trust in AI is accompanied by a tacit compromise whose essence is to ignore the risks for the sake of the convenience of fast intelligent systems. Algorithmic responsibility, aimed at reducing the unreliability and threats of AI use, conflicts with the obligations fixed in program code. A priority of technological evolution is to develop standards of algorithmic transparency that ensure the disclosure of information about the consequences of algorithmic decisions.