
Original Research

Open Access

Artificial intelligence applications in tooth avulsion: comparative accuracy of ChatGPT and DeepSeek

  • Gizem Karagöz Doğan1,*,
  • Yelda Polat Yavuz2
  • İzzet Yavuz2

1Department of Pediatric Dentistry, Faculty of Dentistry, Iğdır University, 76000 Iğdır, Turkey

2Department of Pediatric Dentistry, Faculty of Dentistry, Dicle University, 21000 Diyarbakır, Turkey

DOI: 10.22514/jocpd.2026.049 | Vol. 50, Issue 2, March 2026, pp. 199–208

Submitted: 10 September 2025 Accepted: 31 October 2025

Published: 03 March 2026

*Corresponding Author(s): Gizem Karagöz Doğan E-mail: gizem.dogan@igdir.edu.tr

Abstract

Background: The accuracy and performance of artificial intelligence (AI)-based chatbots in clinical applications can directly influence healthcare outcomes. In cases of dental trauma, adherence to the International Association of Dental Traumatology (IADT) guidelines is essential for clinical success. Although the use of AI in healthcare is increasing, few studies have evaluated the ability of chatbots to provide accurate information in dental trauma. This study aimed to evaluate and compare the performance of the ChatGPT and DeepSeek platforms in providing guideline-based information on the management of dental avulsion, using the IADT guidelines as a reference standard.

Methods: Based on the IADT guidelines, 25 questions (12 yes/no and 13 open-ended) were posed to ChatGPT-3.5 and DeepSeek over the course of one week. Two independent researchers asked each question three times daily. Responses were classified as correct, incorrect, or insufficient according to the guidelines. Statistical analyses were conducted to assess agreement and accuracy.

Results: A total of 1050 responses were analyzed. DeepSeek demonstrated moderate agreement with the guideline-based answers (κ ≈ 0.52; 95% confidence interval (CI): 0.48–0.55; p < 0.001), whereas ChatGPT showed weak-to-moderate agreement (κ ≈ 0.44; 95% CI: 0.40–0.48; p < 0.001). The mean accuracy difference between the two platforms was approximately 7% (p = 0.001).

Conclusions: ChatGPT and DeepSeek have potential as knowledge resources for healthcare applications. However, their accuracy and consistency in addressing dental avulsion-related questions remain limited. Clinicians should consider these systems as complementary tools that support, but do not replace, clinical expertise and decision-making. Further research should explore AI models specifically trained in dental trauma to determine their clinical utility.
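The agreement figures reported above are Cohen's kappa values comparing each chatbot's classified answers against the IADT guideline reference. As an illustration only (this is not the authors' analysis code, and the labels below are hypothetical), a minimal pure-Python sketch of how such a kappa is computed:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of positions where both labels match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each rater's marginal label counts.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[lab] * counts_b.get(lab, 0) for lab in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical example: guideline reference vs. one chatbot's classified answers,
# using the study's three categories (correct / incorrect / insufficient).
reference = ["correct"] * 7 + ["incorrect"] * 3
chatbot = ["correct"] * 6 + ["incorrect", "correct", "incorrect", "insufficient"]
print(round(cohens_kappa(reference, chatbot), 3))  # prints 0.333
```

Unlike raw percentage agreement, kappa discounts matches expected by chance, which is why it is the conventional statistic for comparing categorical ratings against a reference standard.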


Keywords

Artificial intelligence; Chatbots; ChatGPT; Tooth avulsion; DeepSeek; Large language models


Cite and Share

Gizem Karagöz Doğan, Yelda Polat Yavuz, İzzet Yavuz. Artificial intelligence applications in tooth avulsion: comparative accuracy of ChatGPT and DeepSeek. Journal of Clinical Pediatric Dentistry. 2026; 50(2): 199–208.

References

[1] Cascella M, Montomoli J, Bellini V, Bignami E. Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. Journal of Medical Systems. 2023; 47: 1–5.

[2] Ekmekçi E, Durmazpinar PM. Evaluation of different artificial intelligence applications in responding to regenerative endodontic procedures. BMC Oral Health. 2025; 25: 1–7.

[3] Dubin JA, Bains SS, Chen Z, Hameed D, Nace J, Mont MA, et al. Using a Google web search analysis to assess the utility of ChatGPT in total joint arthroplasty. The Journal of Arthroplasty. 2023; 38: 1195–1202.

[4] Hayder W, Hayder WA. Highlighting DeepSeek-R1: architecture, features and future implications. International Journal of Computer Science and Mobile Computing. 2025; 14: 1–13.

[5] Ulusoy AT, Önder H, Çetin B, Kaya Ş. Knowledge of medical hospital emergency physicians about the first-aid management of traumatic tooth avulsion. International Journal of Paediatric Dentistry. 2012; 22: 211–216.

[6] Ozden I, Gokyar M, Ozden ME, Sazak Ovecoglu H. Assessment of artificial intelligence applications in responding to dental trauma. Dental Traumatology. 2024; 40: 722–729.

[7] Ghaderi F, Adl A, Ranjbar Z. Effect of a leaflet given to parents on knowledge of tooth avulsion. European Journal of Paediatric Dentistry. 2013; 14: 13–16.

[8] Al-Jundi SH. Knowledge of Jordanian mothers with regards to emergency management of dental trauma. Dental Traumatology. 2006; 22: 291–295.

[9] Al-Jame Q, Andersson L, Al-Asfour A. Kuwaiti parents’ knowledge of first-aid measures of avulsion and replantation of teeth. Medical Principles and Practice. 2007; 16: 274–279.

[10] Jain A, Kulkarni P, Kumar S, Jain M. Knowledge and attitude of parents towards avulsed permanent tooth of their children and its emergency management in Bhopal city. Journal of Clinical and Diagnostic Research. 2017; 11: ZC40.

[11] Ozer S, Yilmaz EI, Bayrak S, Sen Tunc E. Parental knowledge and attitudes regarding the emergency treatment of avulsed permanent teeth. European Journal of Dentistry. 2012; 6: 370–375.

[12] Loo TJ, Gurunathan D, Somasundaram S. Knowledge and attitude of parents with regard to avulsed permanent tooth of their children and their emergency management—Chennai. Journal of Indian Society of Pedodontics and Preventive Dentistry. 2014; 32: 97–110.

[13] Hu LW, Prisco CRD, Bombana AC. Knowledge of Brazilian general dentists and endodontists about the emergency management of dento-alveolar trauma. Dental Traumatology. 2006; 22: 113–117.

[14] Kostopoulou MN, Duggal MS. A study into dentists’ knowledge of the treatment of traumatic injuries to young permanent incisors. International Journal of Paediatric Dentistry. 2005; 15: 10–19.

[15] Santos MESMI, Habecost APZ, Gomes FV, Weber JBB, De Oliveira MG. Parent and caretaker knowledge about avulsion of permanent teeth. Dental Traumatology. 2009; 25: 203–208.

[16] Díaz-Flores García V, Freire Y, Tortosa M, Tejedor B, Estevez R, Suárez A. Google Gemini’s performance in endodontics: a study on answer precision and reliability. Applied Sciences. 2024; 14: 6390.

[17] Suárez A, Díaz-Flores García V, Algar J, Gómez Sánchez M, Llorente de Pedro M, Freire Y. Unveiling the ChatGPT phenomenon: evaluating the consistency and accuracy of endodontic question answers. International Endodontic Journal. 2024; 57: 108–113.

[18] Umer F, Habib S. Critical analysis of artificial intelligence in endodontics: a scoping review. Journal of Endodontics. 2022; 48: 152–160.

[19] Arqub SA, Al-Moghrabi D, Allareddy V, Upadhyay M, Vaid N, Yadav S. Content analysis of AI-generated (ChatGPT) responses concerning orthodontic clear aligners. The Angle Orthodontist. 2024; 94: 263–272.

[20] Buldur M, Sezer B. Can artificial intelligence effectively respond to frequently asked questions about fluoride usage and effects? A qualitative study on ChatGPT. Fluoride. 2023; 56: 201–216.

[21] Balel Y. Can ChatGPT be used in oral and maxillofacial surgery? Journal of Stomatology Oral and Maxillofacial Surgery. 2023; 124: 101471.

[22] Bayraktar Nahir C. Can ChatGPT be guide in pediatric dentistry? BMC Oral Health. 2025; 25: 1–8.

[23] Lahat A, Shachar E, Avidan B, Glicksberg B, Klang E. Evaluating the utility of a large language model in answering common patients’ gastrointestinal health-related questions: are we there yet? Diagnostics. 2023; 13: 1950.

[24] Schwendicke F, Samek W, Krois J. Artificial intelligence in dentistry: chances and challenges. Journal of Dental Research. 2020; 99: 769–774.

[25] Samaan JS, Yeo YH, Rajeev N, Hawley L, Abel S, Ng WH, et al. Assessing the accuracy of responses by the language model ChatGPT to questions regarding bariatric surgery. Obesity Surgery. 2023; 33: 1790–1796.

