Bridging the information gap in pediatric dentistry: a comparison of ChatGPT-4o, Google Gemini Advanced, and expert responses based on evaluations by parents and pediatric dentists
İsmail Haktan Çelik*, Hasan Camcı, Farhad Salmanpour
1 Department of Pediatric Dentistry, Afyonkarahisar Health Sciences University, 03030 Afyonkarahisar, Turkey
2 Department of Orthodontics, Afyonkarahisar Health Sciences University, 03030 Afyonkarahisar, Turkey
DOI: 10.22514/jocpd.2026.014 | Vol. 50, Issue 1, January 2026, pp. 147-155
Submitted: 08 April 2025 Accepted: 12 May 2025
Published: 03 January 2026
*Corresponding Author(s): İsmail Haktan Çelik E-mail: haktan.celik@afsu.edu.tr
Background: This study aimed to evaluate the accuracy and adequacy of responses provided by ChatGPT-4o and Google Gemini Advanced to common pediatric dentistry questions posed by parents, and to compare these responses with those given by pediatric dentistry experts.

Methods: Fifty-seven questions were extracted from the “Frequently Asked Questions by Parents” section of the International Association of Paediatric Dentistry (IAPD) website. Based on a preliminary survey of 20 pediatric dentists, the 15 most frequently asked questions were selected. For each question, three responses (from experts, ChatGPT-4o and Google Gemini Advanced) were collected and assessed for readability with the Flesch-Kincaid test. The responses were then randomized and included in a survey completed by 47 pediatric dentists and 101 parents, who rated the adequacy of each answer on a scale from 1 (insufficient) to 10 (very sufficient).

Results: Pediatric dentists consistently rated expert answers higher than artificial intelligence (AI)-generated responses, with significant differences for 13 of the 15 questions (p < 0.05). Parents' satisfaction varied: their ratings showed no significant differences for eight of the questions, and in some instances they rated AI-generated answers as highly as, or higher than, expert responses.

Conclusions: Alignment between expert opinions and AI-generated responses was inconsistent. While pediatric dentists generally found expert answers more satisfactory, parents occasionally preferred chatbot-generated answers, depending on the question. These findings suggest that AI-powered chatbots hold promise for patient education in pediatric dentistry, though expert oversight remains essential.
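The abstract reports readability scoring with the Flesch-Kincaid test but does not state which variant (Reading Ease or Grade Level) or which tool the authors used. As a rough illustration only, not the authors' implementation, the Python sketch below computes both standard English-language scores; the function names and the naive vowel-group syllable counter are assumptions introduced here.

import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of vowels, subtract one silent trailing 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    # Returns (Flesch Reading Ease, Flesch-Kincaid Grade Level).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    wps = len(words) / len(sentences)                            # words per sentence
    spw = sum(count_syllables(w) for w in words) / len(words)    # syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw    # Flesch (1948) Reading Ease
    grade = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid Grade Level
    return ease, grade

# Hypothetical usage: score one answer; higher ease means easier text.
ease, grade = flesch_scores("Baby teeth matter. They hold space for adult teeth.")
print(f"Reading ease: {ease:.1f}, grade level: {grade:.1f}")

Reading Ease runs roughly 0-100 (higher is easier), and Grade Level approximates the US school grade needed to follow the text. Dictionary-based tools such as the textstat package count syllables more accurately and will differ somewhat from this heuristic.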
Pediatric dentistry; Orthodontics; Artificial intelligence; ChatGPT-4o; Google Gemini Advanced
İsmail Haktan Çelik, Hasan Camcı, Farhad Salmanpour. Bridging the information gap in pediatric dentistry: a comparison of ChatGPT-4o, Google Gemini Advanced, and expert responses based on evaluations by parents and pediatric dentists. Journal of Clinical Pediatric Dentistry. 2026; 50(1): 147-155.