Patient Victimhood and the Risks of Using Artificial Intelligence Technology in Healthcare

Keywords: artificial intelligence in healthcare, digital medicine, digital crime, machine learning, modern challenges, ethics


Artificial intelligence technologies are attracting increasing interest in medicine and represent one of the key areas of the digital transformation of healthcare. According to a number of experts, medical professionals, and digital technology developers, the use of medical devices equipped with artificial intelligence will raise healthcare to a new level, improving clinical decision-making, enabling high-quality analysis of digital images, and supporting the prediction and monitoring of prescribed treatment.
However, failures of medical devices equipped with artificial intelligence systems can have serious consequences for both clinical outcomes and patients. Such consequences could undermine public confidence in artificial intelligence technologies and in healthcare institutions in general. Given the relative novelty of these technological solutions, data on the clinical efficacy and safety of AI-equipped products are currently considered insufficient.
This publication addresses two important questions. The first part describes the main physical, social, and mental characteristics (properties) of patients that increase the likelihood that they will become victims of a crime committed in the course of providing innovative medical services. The second part identifies the risks of using artificial intelligence technologies in healthcare that cause the greatest concern both to patients and to those who deploy these innovative technologies.



Author Biography

A.A. Shutova, Kazan Innovative University named after V.G. Timiryasov (IEML), Kazan, Russia

A.A. Shutova, Candidate of Legal Sciences, Senior Researcher, Research Institute of Digital Technologies and Law; Associate Professor, Department of Criminal Law and Process



How to Cite
Shutova, A. (2023). Patient Victimhood and the Risks of Using Artificial Intelligence Technology in Healthcare. Victimology [Viktimologia], 10(4), 492–502.