Artificial intelligence and patient care: Considerations of medication safety

Artificial Intelligence (AI) has generated significant excitement globally. One notable AI system is ChatGPT (https://chatgpt.com/), which has become widely used for information seeking, idea generation, and more. It has significantly simplified the lives of many by providing rapid access to information and various resources, influencing not only individuals but also the healthcare system. However, this convenience and efficiency may also come with uncertainty about the accuracy of the information AI generates.

AI in Clinical Practice: Context Matters

The increasing utilization and accessibility of AI present potential risks to patients and to clinical practice. Online tools, such as symptom checkers, are publicly available. Users may, for example, enter their symptoms into these tools, which are “trained” to generate a list of possible diagnoses based on the symptoms presented. A user presenting with stomach pain may end up with diagnoses ranging from constipation to colon cancer. With such a wide range of “possible diagnosis” outputs, these tools often lack the nuanced clinical context that healthcare providers would consider in their patient assessments. Patients might self-diagnose and self-treat, potentially worsening their condition or exposing themselves to the risk of side effects from inappropriate medication use. In addition, with increasing access to online drug-drug interaction checkers, patients may identify a potential interaction, self-assess, and decide to stop taking their medication(s) without consulting their healthcare providers, thereby compromising their health outcomes.

The advancement of technology facilitates information access – both information seeking (by patients and healthcare practitioners) and information generating (e.g., by various online tools and AI systems). However, the subsequent use and application of that information requires clinical reasoning and critical appraisal skills, in addition to consideration of the clinical context unique to each patient. In health professions education, students and trainees go through clinical training to develop their critical thinking skills and apply their clinical knowledge to diverse patient scenarios. No two patients are alike. It is the role and responsibility of a healthcare professional to contextualize patient health information and make appropriate recommendations accordingly.

AI in Clinical Practice: Taking Caution

Despite rigorous training, healthcare providers can still miss essential clinical context. Prescribing algorithms can assist healthcare providers in performing patient assessments to some extent, but they should not be relied upon exclusively. This caution arises from the recognition that a patient’s health is influenced by a complex array of factors, particularly the social determinants of health. It is important to acknowledge the potential biases in AI-generated algorithms, which stem from how they were developed, who developed them, and which patient populations were studied, among other factors. These algorithms may lack clinical reasoning, and they may not fully consider clinical context or the social determinants of health. Critical thinking is especially crucial when, for instance, red-flag symptoms must be weighed in forming a differential diagnosis. Missing red-flag symptoms in the context of a patient’s characteristics can have a profoundly negative impact on their health.

Over-reliance on AI tools may also adversely affect the health outcomes of marginalized populations, as these algorithms might not account for the unique circumstances of individuals in such groups. Patients often fall into a grey area not precisely covered by an AI-generated algorithm’s criteria. In such cases, healthcare providers must exercise their clinical judgment to make the best possible recommendations, underscoring the importance of clinicians having a thorough understanding of, and sound clinical reasoning about, the conditions they are assessing.

Final Remarks: AI is Here to Stay

The use of AI should be embraced responsibly, both by healthcare providers and by patients. As AI technology becomes smarter and simpler for patients and clinicians to use, healthcare professionals carry greater accountability than ever to contextualize and critically appraise the information it provides. Doing so will help mitigate potential biases in AI tools that may affect patient health outcomes.

Recently, a 2024 Journal of the American Pharmacists Association (JAPhA) publication (https://www.japha.org/article/S1544-3191(24)00139-0/abstract) found that medication-related patient education materials generated by ChatGPT varied in accuracy. Therefore, with respect to medication safety, patients should always consider the following five questions to ask about their medications (https://www.ismp-canada.org/medrec/5questions.htm) during a clinical encounter with their healthcare providers:

• Are there any changes in my medications?

• What medications do I need to keep taking?

• How should I use this medication correctly?

• What should be monitored while taking this medication?

• Is there any follow-up required after starting this medication? 

Using AI to answer medical or medication-related questions may seem like a reasonable approach; however, be mindful that patients’ health is influenced by a complex set of factors that AI is not yet well equipped to address.

By Samir Kanji, Neil Patel, and Certina Ho

Samir Kanji is a PharmD Student at the Leslie Dan Faculty of Pharmacy, University of Toronto; Neil Patel is a 2023 PharmD graduate of the Leslie Dan Faculty of Pharmacy, University of Toronto; and Certina Ho is an Assistant Professor at the Department of Psychiatry and the Leslie Dan Faculty of Pharmacy, University of Toronto.
