Overview:
This lecture will provide learners with an overview of the history of AI chatbots in medicine, knowledge of the latest developments in ChatGPT and other large language models, and the opportunities and challenges of using chatbots in the field of laboratory medicine. I will discuss cutting-edge research that assesses the accuracy and reliability of chatbots in answering questions related to laboratory tests or the interpretation of laboratory data. I will use a ChatGPT demo to show the functions and inherent limitations of chatbots, including issues such as “hallucinations,” and further elucidate the root causes of these limitations.
Speaker:
Dr. Sarina Yang is an Associate Professor in the Department of Pathology and Laboratory Medicine, Medical Director of the Clinical Chemistry and Toxicology Laboratory, and Co-Director of the ComACC-accredited Clinical Chemistry Fellowship Program at NewYork-Presbyterian Hospital/Weill Cornell Medicine. She is board certified in both Clinical Chemistry and Toxicological Chemistry by the ABCC. She currently serves as Chair of the Artificial Intelligence Working Group of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), and is a member of the ADLM Academy Council, Education Core Committee, and Data Analytics Steering Committee. She serves as an Associate Editor of the journal Clinical Chemistry and on the Editorial Boards of Critical Reviews in Clinical Laboratory Sciences and Annals of Laboratory Medicine. Her clinical and research interests are artificial intelligence, machine learning, clinical mass spectrometry, and toxicology/TDM.
Learning Objectives:
At the end of this session, participants will be able to:
1) Discuss the capabilities and potential applications of chatbots in the field of laboratory medicine.
2) Describe research and studies that evaluate the accuracy and reliability of chatbots in answering laboratory medicine questions.
3) Explain the root causes underlying chatbots’ limitations, such as misinformation, inconsistencies, and lack of human-like reasoning abilities.