Review

Advancing Dental Diagnostics: A Review of Artificial Intelligence Applications and Challenges in Dentistry

1
Department of Computer Science, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
2
Department of Computer Information Systems, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, P.O. Box 1982, Dammam 31441, Saudi Arabia
*
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2024, 8(6), 66; https://doi.org/10.3390/bdcc8060066
Submission received: 16 April 2024 / Revised: 3 June 2024 / Accepted: 3 June 2024 / Published: 7 June 2024

Abstract

The rise of artificial intelligence has created and facilitated numerous everyday tasks in a variety of industries, including dentistry. Dentists have utilized X-rays for diagnosing patients’ ailments for many years. However, the procedure is typically performed manually, which can be challenging and time-consuming for non-specialist practitioners and carries a significant risk of error. As a result, researchers have turned to machine and deep learning modeling approaches to precisely identify dental disorders using X-ray images. This review is motivated by the need to address these challenges and to explore the potential of AI to enhance diagnostic accuracy, efficiency, and reliability in dental practice. Although artificial intelligence is frequently employed in dentistry, the outcomes of these approaches are still influenced by factors such as dataset availability and size, class balance, and data interpretation capability. Consequently, it is critical to work with the research community to address these issues in order to identify the most effective approaches for use in ongoing investigations. This article, which is based on a literature review, provides a concise summary of the diagnosis process using X-ray imaging systems, offers a thorough understanding of the difficulties that dental researchers face, and presents an amalgamated evaluation of the performances and methodologies assessed using publicly available benchmarks.

1. Introduction

Artificial intelligence (AI) represents a significant technological advancement, enabling not only robots but also various AI systems to emulate intelligent human behavior. In therapeutic contexts, the concept of augmented intelligence extends AI’s application within the medical profession, aiming to improve accuracy and efficacy in dental diagnostics. Augmented intelligence emphasizes AI’s supportive and complementary role alongside medical specialists, leveraging its extensive data-processing capabilities to assess results effectively [1].
X-ray examinations aid in dental diagnosis by allowing oral health professionals to visualize interior tooth structures that are not apparent to the naked eye, lessening the visual diagnostic load. The various types of dental X-rays have transformed dentistry by giving precise insights into the architecture of the mouth and teeth. With these imaging technologies, dentists may detect gum disease, tooth decay, and other disorders early on, avoiding further problems and improving treatment outcomes. Moreover, dental X-rays can reveal hidden problems that may be challenging to find in a clinical examination, such as impacted teeth [2].
As artificial intelligence is now widely used in dentistry, it has become a valuable tool for both detecting illness and forecasting its course. Artificial intelligence and information systems are increasingly relied upon to diagnose common ailments accurately. However, while patients may trust a dental diagnosis, they may also harbor concerns about how it is produced. It therefore remains crucial for systems to access sufficiently large datasets, maintain balanced data splits, and effectively analyze data to uphold quality standards. Furthermore, ethical considerations must be addressed when utilizing data in AI technologies, aligning with the need for transparency in how these technologies function from the patient’s perspective [3].
With an emphasis on the years from 2010 onward, this scoping study seeks to explore the state of AI-assisted diagnosis in oral health using X-ray images. The goals are to provide an overview of the area’s present level of development, highlight its limitations, and pinpoint the research gaps that must be addressed in order to advance the field.
Among the principal contributions are:
  • Providing a schema for the current X-ray imaging literature;
  • Reviewing the state of the art of AI models used in dental practice;
  • Analyzing the possibilities, limitations, and future trends in using artificial intelligence in dentistry.
The remainder of this paper is organized as follows: Section 2 provides an overview of the materials and methods employed in the study; Section 3 delves into the challenges in the automated diagnosis of dental diseases, outlining the problems encountered in this domain; Section 4 explores the various types of dental diseases; Section 5 investigates the approaches to diagnosing dental diseases through X-ray imaging; Section 6 focuses on the crucial aspect of preprocessing to enhance data quality; Section 7 addresses the datasets used in the reviewed articles; Section 8 explores the evolution of AI diagnostic tools in dentistry; Section 9 reviews relevant work; Section 10 delves into the specific role of X-ray imaging in dental diagnostics; Section 11 investigates the transformative impact of AI on dental healthcare; Section 12 delves into knowledge gaps and future research directions; and finally, Section 13 concludes with a summary of findings.

2. Materials and Methods

2.1. Protocol

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed in the review of the available literature in order to ensure the quality of reporting.

2.2. Electronic Search Strategy

Google Scholar was used to conduct a thorough electronic search for all relevant research studies published between 2010 and 2023. The search focused on publications related to the application of artificial intelligence, deep learning, and neural networks in dentistry. Specifically, we included the following types of materials:
  • Peer-reviewed journal articles;
  • Conference papers;
  • Review articles;
  • Clinical studies;
  • Case studies;
  • Technical reports;
  • Theses and dissertations.

2.3. Eligibility Criteria

2.3.1. Inclusion Criteria

I.
Timeline: Publications over the past 14 years (2010–2023) focused on the application of artificial intelligence, deep learning, and neural networks in dentistry;
II.
Language: All English-language publications were incorporated, regardless of the location of publication;
III.
Data and Outcome: Studies that adequately explain the datasets exploited, supported by explicit reporting of predictive and measurable outcomes to measure the effectiveness of the suggested model.

2.3.2. Exclusion Criteria

I.
Type of data used: research that does not provide precise details on the kinds of data that were used;
II.
Methodology: inadequately documented research on deep learning, machine learning, and computer vision techniques;
III.
Outcome: Studies that failed to document quantifiable results.

2.4. Study Selection and Items Collected

Following the elimination of duplicate papers, titles and abstracts were reviewed. The papers’ full texts were then analyzed against the eligibility criteria. Finally, the references of each paper were carefully examined.
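The selection process above can be sketched as a small filtering pipeline. The record fields below (`title`, `year`, `language`, and so on) are hypothetical stand-ins for what was checked manually; the sketch simply encodes the eligibility criteria of Section 2.3.

```python
# Sketch of the study-selection pipeline (hypothetical record fields;
# the actual screening was performed manually by the reviewers).

def select_studies(records):
    """Deduplicate, then apply the eligibility criteria of Section 2.3."""
    # 1. Remove duplicate papers (matched here by case-insensitive title).
    seen, unique = set(), []
    for rec in records:
        key = rec["title"].strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)

    # 2. Title/abstract and full-text screening against the criteria.
    def eligible(rec):
        return (
            2010 <= rec["year"] <= 2023          # timeline
            and rec["language"] == "English"      # language
            and rec["reports_dataset"]            # dataset adequately described
            and rec["reports_outcomes"]           # quantifiable results reported
        )

    return [rec for rec in unique if eligible(rec)]

papers = [
    {"title": "AI caries detection", "year": 2018, "language": "English",
     "reports_dataset": True, "reports_outcomes": True},
    {"title": "AI Caries Detection", "year": 2018, "language": "English",
     "reports_dataset": True, "reports_outcomes": True},   # duplicate title
    {"title": "Early expert system", "year": 2005, "language": "English",
     "reports_dataset": True, "reports_outcomes": True},   # outside timeline
]
print([p["title"] for p in select_studies(papers)])  # ['AI caries detection']
```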

3. Challenges in Automated Dental Disease Diagnosis

AI-based models for dental disease detection, diagnosis, and prediction have become quite popular recently. However, challenges remain in aspects such as generalizability, accessibility, and restricted data availability. In order to address these challenges and assist dental professionals in treatment and prognosis planning, the primary focus of research continues to be on creating effective and precise solutions. AI models may struggle to achieve high sensitivity and specificity due to differences in image quality and anatomical structure [4].
  • The large quantity of data needed for training, validation, and testing is a significant obstacle in deploying artificial intelligence (AI) for dental caries detection. This involves extremely sensitive patient data, such as dental images and medical histories. Data sensitivity is a critical issue due to the personal and private nature of health information, which includes identifiable patient records and medical histories. The misuse of, or unauthorized access to, these data can lead to severe ethical and privacy violations. Ensuring data privacy and security is paramount to maintaining patient trust and upholding ethical standards in AI applications for dental diagnostics. Strict adherence to data protection regulations, such as the GDPR, and implementing robust encryption and anonymization techniques are essential to safeguarding patient information [5].
  • One major concern revolves around the transparency of artificial intelligence algorithms and data. The reliability of AI systems’ predictions greatly depends on the accuracy of annotations and labeling in the training dataset. Inaccurately labeled data can lead to subpar outcomes. This issue is especially pronounced in clinic-labeled datasets, where the lack of consistent quality further limits the transparency and effectiveness of AI systems [6].
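As a concrete illustration of the anonymization safeguards mentioned in the first point, one common building block is pseudonymization: replacing the raw patient identifier with a salted hash before images enter a training pipeline. This is a minimal sketch only (the field names and salt handling are assumptions, and it is one small part of GDPR-style data protection, not a complete anonymization scheme).

```python
import hashlib

def pseudonymize(patient_id: str, salt: str) -> str:
    """Replace a raw patient ID with a salted SHA-256 token.

    Records can still be linked by token without exposing the identity;
    the salt must be stored separately and protected.
    """
    digest = hashlib.sha256((salt + patient_id).encode("utf-8")).hexdigest()
    return digest[:16]  # shortened token used as the de-identified record key

# Hypothetical record entering an AI training pipeline.
record = {"patient_id": "PT-000123", "image": "periapical_0042.png"}
token = pseudonymize(record["patient_id"], salt="site-secret")
deidentified = {"subject": token, "image": record["image"]}
print(deidentified["subject"] != record["patient_id"])  # True
```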

4. Types of Dental Diseases

In Figure 1, we aim to provide a brief background of various oral health conditions discussed in the literature. These conditions encompass a spectrum of issues ranging from common concerns, like dental caries (tooth decay), to more intricate challenges, such as oral cancer.

5. Approaches to Diagnosing Dental Diseases through X-ray Imaging

This section delves into various research investigations aimed at identifying dental pathologies. Figure 2 depicts a process flow diagram that serves as a visual guide for the diagnostic procedures of dental ailments.

6. Preprocessing Techniques for Dental X-ray Pictures

Several preprocessing steps are performed to improve the quality of dental X-ray images and facilitate efficient interpretation before they are prepared for analysis. To diversify the dataset, methods such as data augmentation, which involves rotation, scaling, and other operations, are used. To capture visual aspects like texture, features such as entropy, intensity, and local binary patterns are extracted.
Image modification steps include resizing images to a standard format and cropping them to remove irrelevant information. Segmentation is utilized to separate the upper and lower jaws as well as the teeth and to define a particular region of interest. Binary images are produced by thresholding images converted from RGB (red, green, and blue) format.
Enhancement includes applying different filtering methods, quadtree decomposition, and contrast normalization. Images with missing teeth were culled from the dataset. Refining the dataset involved flipping images and segmenting them to reduce input sizes. The dataset was supplemented using tools such as ImageDataGenerator, and certain images were set aside for validation.
The accuracy of dental X-ray detection is increased by techniques including image sharpening and adaptive histogram equalization. A comprehensive approach to preprocessing thus sets the stage for subsequent studies that take advantage of deep learning models such as SqueezeNet and AlexNet.
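A minimal sketch of three of the steps discussed above, applied to a toy grayscale image (this is illustrative code, not any reviewed paper's exact pipeline): histogram equalization to boost contrast, global thresholding to a binary image, and a horizontal flip as a simple augmentation.

```python
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Classic histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Stretch the cumulative distribution to span the full 0..255 range.
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[img].astype(np.uint8)

def binarize(img: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Global thresholding to a binary image (tooth/background sketch)."""
    return (img >= thresh).astype(np.uint8)

def augment(img: np.ndarray) -> list:
    """Return the image plus a horizontal flip to diversify the dataset."""
    return [img, img[:, ::-1]]

# Toy 8x8 "radiograph" with a simple intensity gradient.
xray = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (8, 1))
eq = equalize(xray)
binary = binarize(eq)
print(binary.shape, len(augment(xray)))  # (8, 8) 2
```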

7. Datasets Used for AI-Based Systems for Dentistry E-Health

This section includes details of the datasets used in some studies as a general review, starting from the earliest to the most recent paper.
Firstly, in 2017, Ref. [7] used experimental datasets from Hanoi Medical University in Vietnam. The dataset was divided into a training dataset and test dataset, and included 56 dental photographs taken between 2014 and 2015.
One year forward, in 2018, the Department of Periodontology at Daejeon Dental Hospital, Won Kwang University, conducted a study [8] that was authorized by the Institutional Review Board of the hospital. The dental hospital’s PACS system provided the image datasets, which were gathered between January 2016 and December 2017. The images were categorized using electronic medical records. The training and validation datasets included 2400 photos (1200 dental caries and 1200 non-dental caries), whereas the test dataset contained 600 images (300 dental caries and 300 non-dental caries). The datasets were balanced in terms of dental caries against non-dental caries. Among the 3000 periapical radiographs in the dataset, approximately a quarter each represented maxillary premolars (25.9%) and maxillary molars (25.6%), while mandibular premolars accounted for 24.1%, and mandibular molars for 24.4%. Dental caries were detected in 29% of premolars and 27% of molars, whereas non-dental caries were observed in 26% of premolars and 23% of molars.
In the same year, Ref. [9] used 1500 panoramic images with considerable variability that were divided into 10 groups. The training data included 193 annotated images that were divided into two groups for training the segmentation network and validating the findings. After training and fine-tuning, the model was tested on 1224 dental arch pictures. Remarkably, the system utilized transfer learning from the Microsoft Common Objects in Context (MSCOCO) dataset to train itself using just 193 buccal pictures.
A year ahead, in 2019, in Ref. [10], a dataset of 1574 anonymized adult panoramic X-ray images was utilized. The images were randomly selected from the archival database at the Reutov Stomatological Clinic in Russia, covering the period from January 2016 to March 2017. The panoramic X-ray images were captured using the Sirona OrthoPhos XG-3 X-ray apparatus, known for its technological advancements. To ensure methodological rigor, the dataset was divided into two distinct sections: (1) a primary training set and (2) a secondary testing set. The primary training set consisted of 1352 images and was used for the development and refinement of the algorithms for teeth detection and numbering. The secondary testing set comprised 222 images and was specifically employed to evaluate the accuracy and performance metrics of the developed software. This division underscores the meticulous approach taken, ensuring a comprehensive evaluation of the software’s capabilities in automatic teeth detection and numbering.
Further, in the same year, the study [11] by Hu Chen et al. consisted of 1250 digitized dental periapical films as a dataset collected from Peking University School and Hospital of Stomatology. The films were anonymized and digitized at a resolution of 12.5 pixels per mm. An expert dentist annotated each tooth in the images with a bounding box and corresponding tooth number using the Federation Dentaire Internationale (FDI) numbering system. The collection of images was segmented into three parts for the study: a training set with 800 images, a validation set comprising 200 images, and a test set containing 250 images.
Furthermore, in 2019, the dataset in the study [12] by V. Geetha and K. S. Aprameya was obtained from SJM Dental, who compiled a dataset consisting of 49 dental X-ray images featuring caries and 16 images of healthy teeth. These images were captured using a Gendex X-ray unit equipped with an RVG sensor from Sirona. A professional dentist provided annotations of the carious lesions present in the dental radiographs.
Lastly, in 2019, the dataset in the study [13] by Shankeeth Vinayahalingam et al. was sourced from orthopantomograms (OPGs) of individuals who received treatment at the Oral and Maxillofacial Surgery Department of Radboud University Nijmegen Medical Centre in 2017. A set of 81 OPGs, carefully chosen for analysis according to designated inclusion criteria, was randomly divided into two groups (70% training and 30% validation/testing); all images were de-identified to maintain patient confidentiality before the analysis commenced.
Shifting the focus to 2020, the dataset in the study [14] conducted by scholars from the Microelectronics CAD Center at Hangzhou Dianzi University, Hangzhou, China, and the West China School of Stomatology at Sichuan University, comprised 2491 dental X-rays for model training purposes and 138 X-rays for the validation process. The dental radiographs were sorted into six distinct groups, each corresponding to different sequences of tooth numbering.
In that year, the dataset in Ref. [15] by Wenzhe You et al. was gathered from the Department of Pediatric Dentistry at Peking University School and Hospital of Stomatology in Beijing, China. This collection included 886 intraoral photographs of primary teeth, with 709 photos allocated for training and 177 photos earmarked for testing. These images formed the basis for training an artificial intelligence model specifically tailored to the detection of dental plaque.
The dataset in a study [16] conducted in 2020 by Minyoung Chung et al. was sourced in Korea from Osstem Implant Co. Ltd. The collection of data included 818 panoramic dental X-ray images obtained from various dental clinics. Among these images, 574 were utilized for training, 162 were allocated for validation, and 82 were reserved for testing.
Further, in 2020, the dataset used in Ref. [17] consisted of approximately 2000 anonymized panoramic radiography images, collected from three dental clinics. From these, 1000 images were selected for detailed annotation.
In the year that followed, 2021, the dataset used in Ref. [18] was sourced from Kaggle. This dataset comprised visual images representing cavities and non-cavities. It contained a total of 74 images; 60 were used as training data and 14 were used as testing data. The training set covered 45 caries images and 15 non-caries images, while the testing set had 10 caries images and 4 non-caries images. This dataset was utilized for training and testing a deep learning model for classifying dental cavities.
Within the same year, the dataset used in Ref. [19] was specially collected for that research and sanctioned by the Institutional Review Board of National Yang-Ming University. It consisted of 100 teeth with various degrees of caries lesions. These teeth were then disinfected with 5% sodium hypochlorite and kept in distilled water. The dataset was narrowed down by excluding fractures, previously repaired teeth, and teeth with visible cavities. This procedure left 63 teeth for the purpose of examination.
Ref. [20], conducted in 2021, involved a dataset of 78 patients divided into two groups: 37 controls and 41 with temporomandibular disorder (TMD).
Also in 2021, dental panoramic radiographs were taken at Asahi University Hospital as part of normal exams with the QRmaster-P system (Takara Telesystems Corp., Osaka, Japan). The images had a resolution of 3000 × 1500 pixels and a pixel size of 0.1 mm. For that study, 100 images were used for training and assessment. Due to the limited dataset, the images were randomly divided into four groups and fourfold cross-validation was carried out: in each cycle, three groups were used for training and the fourth for assessment, and this process was repeated for each of the four groups [21].
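The fourfold cross-validation scheme just described can be sketched as follows (illustrative code, not the authors' implementation): 100 items stand in for the radiographs, each round trains on three groups and evaluates on the fourth, and every image is used for assessment exactly once.

```python
import random

def fourfold_splits(items, seed=0):
    """Yield (train, test) pairs for fourfold cross-validation."""
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::4] for i in range(4)]   # four random groups
    for k in range(4):
        test = folds[k]                          # held-out group
        train = [x for i, f in enumerate(folds) if i != k for x in f]
        yield train, test

images = list(range(100))  # stand-ins for the 100 radiographs
for round_no, (train, test) in enumerate(fourfold_splits(images), 1):
    assert len(train) == 75 and len(test) == 25
    print(f"round {round_no}: train={len(train)} test={len(test)}")
```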
Moving to 2022, a study conducted by Andac Imak et al. examined images derived from two private oral and dental health clinics; these images were captured using periapical radiography systems, namely Planmeca Intra from Helsinki, Finland, and MyRay from Imola, Italy. The analysis included 380 periapical images from 310 subjects. Forty images were excluded because they were unclear or showed fractures, cysts, or infections, leaving a total of 340 periapical images. The dataset included 150 radiographs depicting carious teeth and 190 depicting non-carious teeth [22].
JK, an experienced dentist, used sophisticated equipment in 2022 to capture images for earlier research and documentation in the study “Caries Detection on Intraoral Images Using Artificial Intelligence”. To avoid condensation, intraoral reflectors were utilized when photographing molar teeth. Quality control entailed eliminating insufficient or duplicated pictures. The images were modified for quality and standardized in size. The study concentrated on healthy or carious surfaces and excluded non-carious flaws. A total of 2417 high-quality clinical photographs were included, representing 1317 occlusal surfaces and 1100 smooth surfaces [23].
Furthermore, in 2022, panoramic X-rays (OPGs) obtained from clinics were incorporated into the study in [24]. A DSLR was used for some of the OPGs, while clinics supplied digital copies for others. The high-quality images made up a unique dataset of 1200 pictures, divided into 70% training and 30% testing images, and showed patients of various ages with a range of dental diseases.
Exploring further within the year 2022, the dental image dataset used in Ref. [25] included panoramic dental X-ray images from a total of 116 people; 90% of the data were used as the training dataset and 10% for validation purposes. These images were obtained from the Medical Imaging Center in Iran and were anonymized. The dataset encompassed various dental conditions, ranging from teeth in good condition to those with partial and complete edentulism. The different conditions were manually segmented by two dentists.
Reference [26], in 2022, involved 445 CBCT scans, from which 890 maxillary sinuses were examined. Manual segmentation was performed on the air space, mucosal thickening (MT), and mucous retention cysts (MRCs) in each sinus. High accuracy in detecting MT and MRCs was achieved by a three-step CNN algorithm trained on low-dose CBCT scans, and the algorithm displayed comparable performance on both low-dose and full-dose CBCT scans. The findings suggest the CNN algorithm’s potential for accurate detection and segmentation of MT and MRCs in the maxillary sinus using CBCT scans.
In 2022, Ref. [27] aimed to enhance the accuracy of diagnosing dental caries in children using panoramic radiographs. It utilized panoramic radiographs of 94 patients who did not have any caries and 210 patients who had one or more instances of dental caries. Of the total dataset of 6028 teeth, 3039 teeth had caries, while 2989 teeth were free from caries. The panoramic radiographs, formatted in JPEG, had an approximate size of 2441 × 1150 pixels.
Lastly, in 2022, the dataset used in Ref. [28] comprised a collective sum of 1432 images of dental conditions. These images were tagged and manually analyzed by an expert.
In a different context of the same year, Ref. [29] developed a framework using CNNs for diagnosing dental ailments in panoramic radiographs (PRs). The framework achieved high specificity for various dental diseases, such as caries, impacted teeth, missing teeth, residual roots, and full crowns. That study utilized a dataset of 2278 PRs, consisting of 1996 PRs for training and 282 PRs for evaluation, sourced from the Stomatology Hospital of Zhejiang Chinese Medical University (not publicly available due to privacy restrictions). It highlighted the potential of CNN-based AI in improving dental diagnostics using PRs, leveraging a large dataset for thorough assessment.
As for the papers not previously addressed, namely [30,31,32,33,34,35,36,37,38,39,40], it is evident that a significant limitation present in these studies is the lack of comprehensive details regarding the dataset employed. These articles notably omitted crucial information that would otherwise contribute to a more thorough understanding and evaluation of their respective research methodologies and findings.

8. Evolution of AI-Diagnostic Tools in Dentistry

A general overview of the AI techniques and architectures employed for various dental data types is provided as follows:

8.1. Convolutional Neural Networks (CNNs)

In dental applications, CNNs are extensively used to analyze image data. They are composed of convolution layers, pooling layers, and fully connected layers. The following are the primary CNN architectures used:

8.1.1. Faster R-CNN

An advanced object detection model that integrates a region proposal network (RPN) with a CNN. The RPN generates region proposals that are then classified and refined by the CNN to detect objects and their bounding boxes within an image [7]. Figure 3 shows the Faster R-CNN structure.
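The RPN matches candidate boxes to ground-truth annotations by their overlap. Below is a sketch of the intersection-over-union (IoU) computation that drives this matching; the boxes are hypothetical (x1, y1, x2, y2) pixel coordinates, not values from any reviewed study.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero width/height if the boxes do not intersect).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

tooth_gt = (50, 40, 90, 120)   # hypothetical annotated tooth bounding box
proposal = (55, 50, 95, 125)   # hypothetical region proposal from the RPN
print(round(iou(tooth_gt, proposal), 3))  # 0.653
```

In the original Faster R-CNN formulation, proposals with IoU above roughly 0.7 against a ground-truth box are labeled positive during RPN training, and those below roughly 0.3 are labeled negative.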

8.1.2. ResNet

A deep convolutional neural network architecture that consists of multiple layers of convolutional and pooling operations with identity shortcut connections; it has shown promise in analyzing dental images for various tasks, including caries diagnosis and segmentation [27]. Figure 4 shows the original ResNet-18 architecture.
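ResNet's defining idea is the residual connection: each block learns a residual F(x) that is added back to its input, y = relu(F(x) + x), which keeps gradients flowing through very deep networks. A minimal dense-layer sketch follows (real ResNet-18 blocks use 3x3 convolutions and batch normalization; the weights here are random stand-ins).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Two weight layers with an identity skip connection."""
    f = relu(x @ w1) @ w2   # the learned residual F(x)
    return relu(f + x)      # shortcut: add the input back before activation

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
w1 = rng.normal(size=(8, 8)) * 0.1
w2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (1, 8)
```

Note that with zero weights the block reduces to relu(x), i.e. the shortcut lets a block fall back to (near-)identity, which is what makes very deep stacks trainable.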

8.2. NASNetMobile

A lightweight version of the NASNet model designed for mobile and embedded applications. It automatically finds efficient neural network architectures. When combined with Inception V4, it enhances classification performance by generating feature vectors from both networks and using a two-layer fully connected network for final classification [14]. Figure 5 shows a mixed model of the Inception V4 + NASNetMobile structure.
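The fusion strategy described above, concatenating feature vectors from two backbones and feeding them to a two-layer fully connected classifier, can be sketched as follows. The pooled feature sizes (1536 for Inception V4, 1056 for NASNetMobile) and the six output classes are assumptions for illustration; random vectors stand in for real backbone features.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fused_classifier(feat_a, feat_b, w1, b1, w2, b2):
    """Concatenate two backbone feature vectors, then classify."""
    fused = np.concatenate([feat_a, feat_b], axis=1)  # joint feature vector
    hidden = np.maximum(fused @ w1 + b1, 0.0)          # FC layer 1 + ReLU
    return softmax(hidden @ w2 + b2)                   # FC layer 2 + softmax

rng = np.random.default_rng(1)
feat_a = rng.normal(size=(1, 1536))   # assumed Inception V4 pooled features
feat_b = rng.normal(size=(1, 1056))   # assumed NASNetMobile pooled features
w1, b1 = rng.normal(size=(2592, 64)) * 0.01, np.zeros(64)
w2, b2 = rng.normal(size=(64, 6)) * 0.01, np.zeros(6)  # 6 assumed classes
probs = fused_classifier(feat_a, feat_b, w1, b1, w2, b2)
print(probs.shape, round(float(probs.sum()), 6))  # (1, 6) 1.0
```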

8.3. YOLOv3

A leading model for object detection and classification tasks. Implemented with the Darknet framework, it features 53 layers, including residual blocks from Darknet-53, which enhance its ability to capture intricate image features effectively [24]. Figure 6 shows the YOLOv3 network architecture.
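Because YOLOv3 predicts many overlapping boxes per object, its raw output is post-processed with non-maximum suppression (NMS), which keeps only the highest-confidence detection in each cluster of overlapping boxes. A sketch, with hypothetical (x1, y1, x2, y2, confidence) detections:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2, ...) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, iou_thresh=0.5):
    """Greedy NMS: keep each box only if it overlaps no kept box too much."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept

detections = [
    (50, 40, 90, 120, 0.92),    # strong detection of one tooth
    (52, 42, 92, 118, 0.80),    # near-duplicate of the same tooth
    (200, 40, 240, 120, 0.88),  # a different tooth
]
print(len(nms(detections)))  # 2
```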

9. Relevant Work Experiences

This decade has seen tremendous development in dental diagnostic tools due to advances in artificial intelligence (AI), machine learning, and automated image analysis. It all started in 2010, when Pedro H. M. Lira and Gilson A. Giraldi published their seminal work on automating the examination of panoramic X-ray images for dental purposes, with an emphasis on feature extraction and teeth segmentation [30]. However, difficulties such as handling overlapping teeth and the absence of comprehensive dataset information highlighted the first obstacles. Later research examined the use of artificial neural networks (ANNs) in orthodontic decision-making [31], delving into the subtleties of determining whether extractions are necessary for children of a certain age. Despite attaining an 80% accuracy rate, questions were raised regarding the system’s generalizability to various age groups and orthodontic issues.
Years later, in pursuit of increasingly advanced diagnostic instruments, Ainas A. ALbahbah et al. (2016) employed artificial neural networks (ANNs) and histograms of oriented gradients (HOGs) to identify dental caries [32]. Despite showing encouraging results, the study’s generalizability and trustworthiness were called into question by the absence of comprehensive dataset information.
Moving into 2017, Jufriadif Na’am et al. took on the task of improving the detection of caries between adjacent teeth in dental X-ray images using image processing techniques [33]. Although the study showed how technology might improve accuracy, it lacked details on the datasets and on its effectiveness in comparison with other methods.
In the same year, Tran Manh Tuan et al.’s consultant system [7], which used fuzzy C-Means clustering and the fuzzy inference system (FIS), pioneered the use of fuzzy rule-based systems for dental diagnostics. The study demonstrated methodology strengths, but encountered constraints in dataset size and specificity.
We now shift the focus to Jae-Hong Lee et al. (2018), who evaluated deep CNN models for the identification and diagnosis of dental caries on periapical radiographs [8], substantially shifting the landscape. While the study achieved strong diagnostic accuracies, particularly for premolars, it was limited by its exclusive focus on permanent teeth and the lack of clinical characteristics.
Progressing further in the same year, Gil Jader et al. harnessed the power of deep learning techniques, notably employing the Mask R-CNN architecture, to achieve precise tooth segmentation in panoramic X-ray images, marking a significant breakthrough in the field with remarkable accuracy metrics [9]. However, despite the groundbreaking nature of their work, the absence of a publication date and limited dataset details posed significant limitations.
Using dental radiographs and the Faster R-CNN architecture for tooth recognition, in 2019, Tuzoff et al. achieved remarkable sensitivity and precision on par with expert ratings [10]. Even with the study’s success, its comprehensibility might be improved by a more thorough explanation of the preprocessing stages.
The year 2019 also witnessed Hu Chen et al.’s innovative Faster R-CNN methodology for the detection of teeth in dental periapical films, which demonstrated high recall and accuracy rates and real-world potential [11]. Using KNN as a classifier, V. Geetha and K. S. Aprameya, in the same year, elevated machine learning to a new level by concentrating on dental caries diagnosis in dental radiographs [12]. The excellent precision and accuracy of the method highlighted machine learning’s promise in automated computer-assisted diagnosis systems.
Delving deeper into the same year’s discoveries, using a CNN based on the U-net architecture, Vinayahalingam et al. experimented with the automatic detection and segmentation of M3 and IAN in dental panoramic radiographs [13]. Although the study was noteworthy for its innovative application of deep learning, it lacked detailed feature extraction standards.
Moving forward to the subsequent year, 2020, an automatic dental X-ray detection technique using a hybrid multi-convolution neural network (CNN) and adaptive histogram equalization was presented by Yaqi Wang et al. [14]. Achieving good accuracy, specificity, and AUC, the study indicated a noteworthy step forward, but with a lack of explicit neural network architecture specifications.
Even with these developments, certain research in 2020 remained unclear, including a study that investigated the utilization of deep learning-based artificial intelligence (AI) models for the identification of dental plaque on primary teeth. Notwithstanding improvements in diagnostic precision, the study’s stated shortcomings included a small sample size, a narrowly focused analysis, and a lack of knowledge about the CNN architecture [15].
On the other hand, Minyoung Chung et al. (2020) presented a novel technique for identifying individual teeth automatically in dental panoramic X-ray images [16]. The method demonstrated a notable improvement in performance over earlier approaches by combining spatial distance regularization (DR) loss with point-by-point localization.
Within the same year, J. Bianchi et al. investigated how artificial intelligence (AI), and specifically deep learning, could revolutionize dental and maxillofacial radiology (DMFR). They emphasized the efficacious outcomes that convolutional neural networks (CNNs) achieved in a range of dental imaging tasks [34]. The study highlighted the potential revolution in dental diagnostics while pointing out limitations, such as the lack of large datasets and of universally agreed ground truth for AI validation.
In 2020, using panoramic X-ray images, Mircea Paul Muresan et al. took on the task of creating a novel method for automated tooth detection and dental issue categorization [17]. Covering 14 different dental disorders, the approach showed strengths in precise tooth segmentation and the detection of dental defects. Remaining drawbacks, however, include the need for further accuracy improvements.
As we move forward into 2021, Sonavane, A., et al. concentrated on cavity detection with a CNN-based approach, making use of a Kaggle dataset [18]. Although the study’s best accuracy was 71.43%, the authors observed that accuracy might be further increased by expanding the dataset. Using optical coherence tomography (OCT) images, Huang and Lee proposed a caries detection study that same year [19]. Using the ResNet-152 architecture, they successfully detected caries with a 95.21% success rate.
Continuing with 2021, Álvaro López-Janeiro et al. advanced the morphological approach to diagnosis by developing a machine learning algorithm to enhance diagnostic performance for malignant salivary gland tumors [35]. Limitations included a small sample size and a lack of external validation.
The landscape of AI in dentistry continued to evolve in 2021 as researchers Jonas Almeida Rodrigues, Joachim Krois, and Falk Schwendicke examined the advancements in AI technology, specifically neural networks, or “deep learning” [36]. To address problems such as the lack of generalizability and robustness in many studies, the study stressed the significance of user comprehension and critical assessment of AI applications.
A number of studies examined various aspects of AI in dentistry in 2021. One study examined the efficacy of KNN, SVM, and MLP classifiers in machine learning attribute extraction techniques for the diagnosis of temporomandibular disorder (TMD) using infrared thermography (IT) [20].
Nevertheless, the study identified some drawbacks, including a limited sample size, poor generalizability, and a lack of external validation. Another study [37] provided a general review of the many artificial intelligence (AI) approaches utilized in dentistry, but it lacked particular examples of AI applications and comprehensive information on the process and results.
In the same year, Alexey N. Subbotin proposed a smartphone app that lets dentists view panoramic X-ray images, with the goal of improving the display of panoramic dental images using computer technology, fog computing environments, and cloud technologies [38]. However, crucial information, such as the machine learning classifiers, dataset size, and performance metrics, was missing.
Also worth reviewing for this year is the study by Chisako Muramatsu et al., which investigated a CNN-based method for tooth identification and categorization in dental panoramic radiographs and achieved good sensitivity and classification accuracy [21]. The study’s weaknesses included the small dataset and the absence of molars.
The progress continued in 2022 as Andac Imak et al. proposed a unique technique for automatic caries detection using periapical images [22]. The researchers used an MI-DCNNE with a score-based grouping method to achieve excellent accuracy while acknowledging dataset limits.
Furthermore, in 2022, J. Kühnisch et al. investigated artificial intelligence in dental diagnostics, specifically employing CNNs to identify and describe caries from intraoral images [23]. The study exhibited a strong ability to detect cavities while also recognizing the need for further refinement of the technique.
During that same year, Patil et al. [39] explored the applications and pitfalls of AI in diagnosing oral diseases. Their review emphasized the utility of AI in diagnosing dental caries, maxillary sinus diseases, periodontal diseases, salivary gland diseases, TMJ disorders, and oral cancer through clinical data and diagnostic images. They highlighted the immense potential of AI in improving diagnostic accuracy, reducing costs, and minimizing human errors. However, they also pointed out challenges, including the need for larger datasets and the integration of AI into routine clinical practice, which remains a significant hurdle.
Furthermore, in 2022, De Angelis et al. [40] advanced the field further by evaluating the performance of AI diagnostic software named Apox in analyzing panoramic X-rays. Their study demonstrated the AI system’s ability to accurately identify dental structures, such as dental formulae, implants, prosthetic crowns, fillings, and root remnants. The results showed high overall sensitivity (0.89) and specificity (0.98), emphasizing the software’s reliability in dental diagnostics. However, challenges remained, particularly in detecting radiolucent materials such as fillings and residual roots, which impacted sensitivity. This study underscored the significant progress made in AI-based dental diagnostics while highlighting areas for further improvement to enhance comprehensive diagnostic accuracy.
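Sensitivity and specificity values such as those reported for Apox are derived directly from confusion-matrix counts; a minimal illustration using hypothetical counts (not the study’s data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity is the fraction of true positives among all actual
    positives; specificity is the fraction of true negatives among all
    actual negatives."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for one dental-structure class (illustrative only):
se, sp = sensitivity_specificity(tp=89, fn=11, tn=98, fp=2)
print(round(se, 2), round(sp, 2))  # 0.89 0.98
```

High specificity with lower sensitivity, as in the Apox results for radiolucent materials, means the software rarely flags absent structures but misses some present ones.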
Also, in the same year, Yassir Edrees Almalki et al. suggested utilizing YOLOv3, a deep learning model, to identify and categorize dental disorders, such as cavities, root canals, dental crowns, and broken-down root canals [24]. The study was highly accurate, although it had some drawbacks, including a small sample size and a limited representation of the total population.
During that same year, Abdullah S. AL-Malaise et al. contributed to the development of a method for categorizing dental X-ray images, distinguishing between cavities, fillings, and implants [25]. The study produced an accuracy rate of 96.51% using a NASNet model with data augmentation, demonstrating strengths in reaching remarkable accuracy with a small sample size.
Another study in 2022 proposed and evaluated a CNN algorithm for automatically detecting and segmenting mucosal thickening (MT) and mucous retention cysts (MRCs) in the maxillary sinus using low-dose and full-dose CBCT [26]. Demonstrating high accuracy, the study acknowledged drawbacks, including manual segmentation, an inadequate sample size, and a lack of external validation. Additionally, Zhou et al. (2022) sought to enhance traditional CNNs for more accurate identification of dental caries in children on panoramic radiographs, proposing a context-aware CNN that considers information from neighboring teeth [27].
Another work published in the same year focused on the creation of an AI-based system for identifying periodontal bone loss in dental images [28]. The study showed great accuracy using deep learning architectures, namely AlexNet and SqueezeNet, with a linear SVM outperforming the other classifiers. However, no detailed information regarding the extracted features or the EfficientNetB5 classifier was provided, and the evaluation methods and potential constraints were not made clear, limiting a comprehensive understanding of the study’s findings.
The landscape continued to evolve, with Zhu et al. (2023) embarking on the development of an AI framework tailored for diagnosing various dental ailments using PRs [29]. Utilizing deep CNNs, specifically the BDU-Net and nnU-Net models, the team aimed to enhance the diagnostic efficiency and accuracy of dental pathologies on PRs. The study’s diagnostic performance was evaluated based on the AUC, Youden’s index, specificity, and sensitivity. The AI framework demonstrated high specificity across multiple dental diseases, such as full crowns, impacted teeth, residual roots, caries, and missing teeth, with AUC values ranging from 0.772 for caries to 0.980 for impacted teeth. That study was significant for its thorough assessment of an AI framework across numerous dental diseases, indicating its promise in dental diagnosis.
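Youden’s index, one of the evaluation measures used by Zhu et al., is simply J = sensitivity + specificity − 1, and the operating threshold is typically chosen to maximize it; a small sketch with hypothetical operating points (not the study’s values):

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1.
    J ranges from 0 (no discriminative value) to 1 (perfect test)."""
    return sensitivity + specificity - 1

# Hypothetical (sensitivity, specificity) pairs at candidate decision
# thresholds for one disease class:
operating_points = {
    0.3: (0.95, 0.70),
    0.5: (0.88, 0.90),
    0.7: (0.75, 0.97),
}
# Pick the threshold that maximizes Youden's J.
best_t = max(operating_points, key=lambda t: youden_index(*operating_points[t]))
print(best_t)  # 0.5, where J = 0.78
```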

10. Role of X-ray Imaging in Dental Diagnostics

In 2018, Jae-Hong Lee et al. performed a study to assess the efficacy of deep CNN methods for identifying and diagnosing dental cavities on periapical radiographs. Their methodology included the utilization of a pre-trained CNN model, specifically GoogLeNet Inception v3, for initial data processing. Additionally, they employed transfer learning techniques to train their dataset. This investigation focused on leveraging deep learning techniques in dental diagnostics, specifically in the realm of detecting and diagnosing dental caries using X-ray images [8].
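The transfer-learning setup described above can be sketched as follows; this is an illustrative PyTorch skeleton with a tiny stand-in backbone (not the actual pre-trained Inception v3 used in the study): the transferred feature layers are frozen and only a new task-specific classification head is optimized.

```python
import torch
import torch.nn as nn

# Stand-in "pretrained" feature extractor; in the study this role is
# played by Inception v3 trained on a large natural-image corpus.
backbone = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in backbone.parameters():
    p.requires_grad = False        # freeze the transferred layers

head = nn.Linear(16, 2)            # new head: caries vs. no caries
model = nn.Sequential(backbone, head)

# Only the head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(4, 1, 64, 64)      # a batch of grayscale radiograph crops
logits = model(x)                  # shape: (4, 2)
```

Freezing the backbone lets a small annotated radiograph dataset train only the final classifier, which is the main reason transfer learning works with limited dental data.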
Moving into 2019, Vinayahalingam et al. [13] aimed to develop an automated system for detecting and segmenting M3 and IAN on OPGs. Utilizing a CNN based on the U-net architecture, the study showcased the application of deep learning techniques in addressing clinical challenges within dental diagnostics. The emphasis was on automating the recognition and segmentation of dental structures through the utilization of dental X-ray images.
Moving to 2020, Yaqi Wang et al. [14] introduced an automated dental X-ray detection methodology utilizing adaptive histogram equalization and a hybrid multi-convolutional neural network (CNN). This innovative approach addressed the critical role of dental radiograph analysis in clinical diagnostics, where expert interpretation includes tasks such as teeth detection and numbering. The study utilized a dataset comprising 2491 images for training and 138 for testing, categorized into six classes based on different tooth number sequences. To enhance the detection accuracy, three pre-processing techniques were applied: image sharpening and median filtering to remove impulse noise and enhance edges, adaptive histogram equalization to mitigate excessive amplification noise, and a multi-CNN hybrid model for classifying the six different locations of dental slices. The results demonstrated that the accuracy and specificity of the test set exceeded 90%. Additionally, four dentists independently annotated the test dataset.
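The histogram-based enhancement step can be illustrated with a simplified sketch; the study used the adaptive (locally applied) variant, whereas this NumPy example applies plain global histogram equalization to show the underlying intensity remapping.

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image -- a
    simplified stand-in for adaptive histogram equalization, which
    applies the same remapping per local tile instead of globally."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Remap intensities so the cumulative distribution becomes uniform.
    scaled = (cdf - cdf_min) / (cdf[-1] - cdf_min)
    lut = np.round(np.clip(scaled, 0, 1) * 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast synthetic "radiograph": values crowd the low range.
img = (np.random.rand(64, 64) * 60).astype(np.uint8)
out = equalize_histogram(img)
print(img.max(), out.max())  # output spans the full 0-255 range afterward
```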
Moreover, in 2020, Minyoung Chung, Jusang Lee, Sanguk Park, et al. [16] directed their focus to automating the identification and detection of individual teeth in dental panoramic X-ray images. Their proposed technique combined point-by-point localization with spatial distance regularization (DR) loss, offering a potential tool for dental clinics without the need for additional identification algorithms. This research made a valuable contribution to the field of dental diagnostics by introducing a novel method for automating the identification of individual teeth in X-ray images.
In the same year, Mircea Paul Muresan et al. [17] aimed to create a unique technique for automated tooth recognition and dental problem categorization using panoramic X-ray images. Leveraging a deep CNN for semantic segmentation and employing various image processing techniques, the study focused on the accurate identification and categorization of dental abnormalities in panoramic X-ray images. The dataset, composed of panoramic radiographs from different dental clinics, emphasized the utilization of image processing and deep learning methodologies in the field of dental diagnostics.
Fast-forwarding to 2022, Andac Imak et al. [22] proposed a novel method for automatically detecting dental caries using periapical images. Employing an MI-DCNNE model, the study sought to overcome the limitations of manual diagnosis by dentists. Measures such as accuracy, specificity, sensitivity, F1-score, and precision were employed for assessing the model’s performance in dental caries detection. The dataset, comprising 340 periapical images from private oral and dental health clinics, underscored the use of deep learning methods in automating the detection of dental caries using X-ray images.
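Precision and the F1-score listed above are computed from the same confusion-matrix counts as sensitivity; a brief illustration with hypothetical counts (not taken from the study):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall (sensitivity), and their harmonic mean (F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical caries-detection counts (illustrative only):
p, r, f1 = precision_recall_f1(tp=80, fp=10, fn=20)
print(round(p, 3), r, round(f1, 3))  # precision 0.889, recall 0.8, F1 0.842
```

Because F1 is the harmonic mean, it penalizes an imbalance between precision and recall, which is why it is reported alongside plain accuracy on imbalanced dental datasets.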
The comprehensive review in Table 1 serves as a brief, yet complete, repository of essential views and conclusions from a collection of papers published between 2010 and 2023, all of which are concerned with AI and machine learning areas of interest. The table, which spans numerous pages, covers key aspects in a structured way, providing a thorough evaluation of the study components. A summary of each category represented in Table 1 is provided below in Figure 7.

11. Impact of AI in Dental Healthcare

The use of artificial intelligence (AI) in dental healthcare holds enormous promise, making use of advances in digital imaging and electronic health data. The debate over AI’s revolutionary influence on dental practice is lively, but worries remain over whether AI will eventually replace the role of oral healthcare professionals. AI excels at simplifying operations and assisting diagnostics thanks to its ability to use structured knowledge and extract insights from large datasets. However, it falls short of recreating the human brain’s sophisticated connections and complex decision-making skills. A comprehensive understanding that draws on the skills of dental professionals therefore remains vital in dental healthcare. Dentists’ capacity to perform physical examinations, integrate medical histories, appraise aesthetic outcomes, and engage in meaningful conversations remains unrivaled, particularly in situations marked by uncertainty. A machine-led approach to dental treatment lacks the human touch that is essential in clinical care. Clinical intuition, tacit judgment, and empathy are critical in providing customized healthcare while maintaining professionalism. While there has been discussion about incorporating empathy into AI algorithms, allowing affective robotics to express artificial emotions, it is critical to emphasize that effective communication between dental healthcare providers and patients involves nonverbal evaluation of hopes, fears, and expectations. These intuitive and spontaneous communication channels, which are essential to the deepest aspects of human-to-human connection, cannot be duplicated in computer language.
The transformative impact of artificial intelligence (AI) and intelligence augmentation (IA) technologies in dental healthcare was explored by Hassani et al. [43]. Their study advocates for a future where smart dentistry is characterized by the harmonious blend of AI and IA, leading to improved patient outcomes, enhanced efficiency, and more personalized dental care. This study shows that AI and IA are designed to enhance the intelligence and efficiency of dentists, rather than pose a threat to job opportunities within the dentistry sector.
As a result, AI should be viewed as a supplement, augmenting and occasionally relieving oral healthcare workers so that they can focus on higher-value duties. These may involve synthesizing patient information, enhancing professional relationships, and staying current on changes in the dental healthcare sector. The progress of dental pedagogy should keep pace with the continual development of AI, enabling a harmonious integration that educates future practitioners to harness technology while maintaining the human-centric parts of their practice.

12. Knowledge Gaps and Future Research Directions

As research studies enhance dental diagnostics with artificial intelligence, our analysis has pinpointed critical knowledge deficits that warrant attention. Primarily, the dataset quality and volume in contemporary research are insufficient, thereby constraining the dependability of AI-driven dental diagnostic tools. Moreover, the validation of AI instruments by dental practitioners in authentic clinical contexts is often wanting, which leads to uncertainties regarding their practical implementation. Additionally, there is a shortfall in the clarity and transparency of AI methodologies within the dental sector, which can impede the professional uptake of these advancements. Research concerning AI-facilitated dental diagnostics is also restricted to a handful of dental conditions, limiting its breadth and prospective influence.
AI’s progress in dental diagnostics requires addressing several crucial aspects to assure the technology’s usefulness, dependability, and acceptability in clinical practice. First, increasing dataset quality and quantity is critical. This entails investing in the acquisition and curation of large, diverse, and high-quality datasets, which will allow AI models to learn more accurately and generalize more broadly across populations. Collaborative clinical validation is also required, encouraging cooperation between AI developers and dental practitioners to undertake complete clinical trials. This real-world testing and iterative improvement will improve the practical applicability and dependability of AI technologies while also instilling trust in dental practitioners. Furthermore, demystifying AI technology through educational programs and tools designed specifically for dental professionals would increase comprehension and enable adoption into daily practice. Expanding the scope of dental problems researched, by widening research efforts to encompass both common and rare conditions, would increase the adaptability of AI diagnostic tools. Interdisciplinary collaboration among AI specialists, dental researchers, practitioners, and other stakeholders is required to create creative solutions and a comprehensive strategy for incorporating AI into dentistry. Establishing explicit ethical principles and legal frameworks will guarantee that AI is developed and deployed responsibly, addressing concerns such as data protection and accountability. Finally, building user-friendly AI diagnostic tools with simple interfaces and seamless integration with existing dental practice systems would help dental professionals to accept and use them more easily.
Focusing on these areas may help the field of AI-enhanced dental diagnostics overcome present constraints and pave the way for more effective, efficient, and widely accepted AI applications, ultimately leading to significant advancements in dental care.

13. Conclusions

In conclusion, this exhaustive review highlights the significant progress and challenges associated with integrating artificial intelligence (AI) within the realm of dental medicine. AI’s advent has been a game-changer for interpreting dental radiographs and identifying oral health issues. Nonetheless, various hurdles impede the full exploitation of AI’s capabilities in enhancing dental care. A key obstacle is the limited availability and variable quality of data, upon which the success of AI models is heavily contingent. Specifically, there is a lack of data on rare dental diseases, complex cases, high-quality annotated radiographs, and procedural variations. Data security and privacy also play a crucial role, given the sensitive nature of dental records and imagery.
AI algorithm transparency and intelligibility remain considerable barriers, impacting the accuracy and reliability of AI systems due to inconsistent data labeling and non-uniform annotations. Therefore, the development of clear, accountable AI frameworks is essential. Documenting AI’s journey in dentistry, this review acknowledges initial endeavors and subsequent breakthroughs with advanced learning algorithms, emphasizing AI’s promise in improving diagnostic precision and efficiency in oral health care. It highlights that artificial intelligence (AI) is a supplement to, not a replacement for, the valuable human judgment that dental professionals provide; AI should instead be used to help dental practitioners make clinical decisions. Highlighting knowledge gaps, this study emphasizes the need for large, high-quality datasets, effective validation of AI tools in practical settings, greater transparency, and wider AI use across a variety of dental conditions.
Furthermore, this paper points out the limits of AI in perceiving nonverbal cues, highlighting the need for human empathy and intuition in dental treatment. While AI can evaluate data and aid in diagnoses with great precision, it cannot read delicate nonverbal cues, such as a patient’s body language, facial expressions, and emotional responses. These indicators are critical for recognizing patient discomfort, anxiety, and pain, which have a substantial impact on the quality of care. Human dentists can respond to nonverbal cues with empathy and adjust their approach accordingly, fostering trust and providing a more individualized treatment experience. Addressing these limitations is critical for realizing the full potential of AI in dentistry, ensuring that AI enhances, rather than diminishes, the quality of patient care.

Author Contributions

Conceptualization, D.M.; Methodology, D.M., H.A., F.B., G.A., N.A. and J.A.; Formal Analysis, H.A., F.B., G.A., N.A. and J.A.; Writing—Original Draft Preparation, H.A., F.B., G.A., N.A. and J.A.; Writing—Review, D.M. and M.I.A.; Supervision, D.M. and M.I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shan, T.; Tay, F.R.; Gu, L. Application of Artificial Intelligence in Dentistry. J. Dent. Res. 2021, 100, 232–244. [Google Scholar] [CrossRef] [PubMed]
  2. Martins, M.V.; Baptista, L.; Luís, H.; Assunção, V.; Araújo, M.-R.; Realinho, V. Machine Learning in X-ray Diagnosis for Oral Health: A Review of Recent Progress. Computation 2023, 11, 115. [Google Scholar] [CrossRef]
  3. Mahdi, S.S.; Battineni, G.; Khawaja, M.; Allana, R.; Siddiqui, M.K.; Agha, D. How Does Artificial Intelligence Impact Digital Healthcare Initiatives? A Review of AI Applications in Dental Healthcare. Int. J. Inf. Manag. Data Insights 2023, 3, 100144. [Google Scholar] [CrossRef]
  4. Shafi, I.; Fatima, A.; Afzal, H.; Díez, I.d.l.T.; Lipari, V.; Breñosa, J.; Ashraf, I. A Comprehensive Review of Recent Advances in Artificial Intelligence for Dentistry E-Health. Diagnostics 2023, 13, 2196. [Google Scholar] [CrossRef] [PubMed]
  5. Anil, S.; Porwal, P.; Porwal, A. Transforming Dental Caries Diagnosis Through Artificial Intelligence-Based Techniques. Cureus 2023, 15, 7. [Google Scholar] [CrossRef] [PubMed]
  6. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Use of artificial intelligence in dentistry: Current clinical trends and research advances. J. Can. Dent. Assoc. 2021, 87, 1488–2159. [Google Scholar]
  7. Tuan, T.M.; Duc, N.T.; Van Hai, P. Dental Diagnosis from X-Ray Images using Fuzzy Rule-Based Systems. Int. J. Fuzzy Syst. Appl. 2017, 6, 1–16. [Google Scholar]
  8. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef] [PubMed]
  9. Jader, G.; Fontineli, J.; Ruiz, M.; Abdalla, K.; Pithon, M.; Oliveira, L. Deep Instance Segmentation of Teeth in Panoramic X-Ray Images. In Proceedings of the 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Parana, Brazil, 29 October 2018–1 November 2018; pp. 400–407. [Google Scholar]
  10. Tuzoff, D.; Tuzova, L.; Bornstein, M.; Krasnov, A.; Kharchenko, M.; Nikolenko, S.; Sveshnikov, M.; Bednenko, G. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofacial Radiol. 2019, 48, 20180051. [Google Scholar] [CrossRef] [PubMed]
  11. Chen, H.; Zhang, K.; Lyu, P.; Li, H.; Zhang, L.; Wu, J.; Lee, C.H. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci. Rep. 2019, 9, 3840. [Google Scholar] [CrossRef]
  12. Geetha, V.; Aprameya, K.S. Dental Caries Diagnosis in X-ray Images using KNN Classifier. Indian J. Sci. Technol. 2019, 12, 5. [Google Scholar] [CrossRef]
  13. Vinayahalingam, S.; Xi, T.; Bergé, S.; Maal, T.; De Jong, G. Automated detection of third molars and mandibular nerve by deep learning. Sci. Rep. 2019, 9, 9007. [Google Scholar] [CrossRef] [PubMed]
  14. Wang, Y.; Sun, L.; Zhang, Y.; Lv, D.; Li, Z.; Qi, W. An Adaptive Enhancement Based Hybrid CNN Model for Digital Dental X-ray Positions Classification. arXiv 2020, arXiv:2005.01509. [Google Scholar]
  15. You, W.; Hao, A.; Li, S.; Wang, Y.; Xia, B. Deep learning-based dental plaque detection on primary teeth: A comparison with clinical assessments. BMC Oral Health 2020, 20, 141. [Google Scholar] [CrossRef] [PubMed]
  16. Chung, M.; Lee, J.; Park, S.; Lee, M.; Lee, C.E.; Lee, J.; Shin, Y.-G. Individual tooth detection and identification from dental panoramic X-ray images via point-wise localization and distance regularization. Artif. Intell. Med. 2021, 111, 101996. [Google Scholar] [CrossRef] [PubMed]
  17. Muresan, M.P.; Barbura, A.R.; Nedevschi, S. Teeth detection and dental problem classification in panoramic X-ray images using deep learning and image processing techniques. In Proceedings of the IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 3–5 September 2020; pp. 457–463. [Google Scholar]
  18. Sonavane, A.; Yadav, R.; Khamparia, A. Dental cavity classification of using convolutional neural network. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1022, 012116. [Google Scholar] [CrossRef]
  19. Huang, Y.P.; Lee, Y.S. Deep Learning for Caries Detection using Optical Coherence Tomography. medRxiv 2021. [CrossRef]
  20. Diniz de Lima, E.; Souza Paulino, J.A.; Lira de Farias Freitas, A.P.; Viana Ferreira, J.E.; Silva Barbosa, J.S.; Bezerra Silva, D.F.; Meira Bento, P.; Araújo Maia Amorim, A.M.; Pita Melo, D. Artificial intelligence and infrared thermography as auxiliary tools in the diagnosis of temporomandibular disorder. Dentomaxillofacial Radiol. 2021, 51, 20210318. [Google Scholar] [CrossRef] [PubMed]
  21. Muramatsu, C.; Morishita, T.; Takahashi, R.; Hayashi, T.; Nishiyama, W.; Ariji, Y.; Zhou, X.; Hara, T.; Katsumata, A.; Ariji, E.; et al. Tooth Detection and Classification on Panoramic Radiographs for Automatic Dental Chart Filing: Improved Classification by Multi-Sized Input Data. Oral Radiol. 2021, 37, 13–19. [Google Scholar] [CrossRef]
  22. Imak, A.; Celebi, A.; Siddique, K.; Turkoglu, M.; Sengur, A.; Salam, I. Dental Caries Detection Using Score-Based Multi-Input Deep Convolutional Neural Network. IEEE Access 2022, 10, 18320–18329. [Google Scholar] [CrossRef]
  23. Kühnisch, J.; Meyer, O.; Hesenius, M.; Hickel, R.; Gruhn, V. Caries Detection on Intraoral Images Using Artificial Intelligence. J. Dent. Res. 2022, 101, 158–165. [Google Scholar] [CrossRef] [PubMed]
  24. Almalki, Y.E.; Din, A.I.; Ramzan, M.; Irfan, M.; Aamir, K.M.; Almalki, A.; Alotaibi, S.; Alaglan, G.; Alshamrani, H.A.; Rahman, S. Deep Learning Models for Classification of Dental Diseases Using Orthopantomography X-ray OPG Images. Sensors 2022, 22, 7370. [Google Scholar] [CrossRef] [PubMed]
  25. AL-Ghamdi, A.; Ragab, M.; AlGhamdi, S.; Asseri, A.; Mansour, R.; Koundal, D. Detection of Dental Diseases through X-Ray Images Using Neural Search Architecture Network. Comput. Intell. Neurosci. 2022, 2022, 3500552. [Google Scholar] [CrossRef] [PubMed]
  26. Hung, K.F.; Ai, Q.Y.H.; King, A.D.; Bornstein, M.M.; Wong, L.M.; Leung, Y.Y. Automatic Detection and Segmentation of Morphological Changes of the Maxillary Sinus Mucosa on Cone-Beam Computed Tomography Images Using a Three-Dimensional Convolutional Neural Network. Clin. Oral Investig. 2022, 26, 3987–3998. [Google Scholar] [CrossRef] [PubMed]
  27. Zhou, X.; Yu, G.; Yin, Q.; Liu, Y.; Zhang, Z.; Sun, J. Context Aware Convolutional Neural Network for Children Caries Diagnosis on Dental Panoramic Radiographs. Comput. Math. Methods Med. 2022, 2022, 6029245. [Google Scholar] [CrossRef]
  28. Sunnetci, K.M.; Ulukaya, S.; Alkan, A. Periodontal Bone Loss Detection Based on Hybrid Deep Learning and Machine Learning Models with a User-Friendly Application. Biomed. Signal Process. Control 2022, 77, 103844. [Google Scholar]
  29. Zhu, J.; Chen, Z.; Zhao, J.; Yu, Y.; Li, X.; Shi, K.; Zhang, F.; Yu, F.; Shi, K.; Sun, Z.; et al. Artificial Intelligence in the Diagnosis of Dental Diseases on Panoramic Radiographs: A Preliminary Study. BMC Oral Health 2023, 23, 358. [Google Scholar] [CrossRef]
  30. Lira, P.; Giraldi, G.; Neves, L.A. Segmentation and Feature Extraction of Panoramic Dental X-Ray Images. In Nature-Inspired Computing Design, Development, and Applications; IGI Global: Hershey, PA, USA, 2010; Volume 1, pp. 306–320. [Google Scholar]
  31. Xie, X.; Wang, L.; Wang, A. Artificial neural network modelling for deciding if extractions are necessary prior to orthodontic treatment. Angle Orthod. 2010, 80, 262–266. [Google Scholar] [CrossRef]
  32. ALbahbah, A.A.; El-Bakry, H.M.; Abd-Elgahany, S. Detection of Caries in Panoramic Dental X-ray Images. Int. J. Electron. Commun. Comput. Eng. 2016, 7, 250–256. [Google Scholar]
  33. Na’am, J.; Harlan, J.; Madenda, S.; Wibowo, E.P. Image Processing of Panoramic Dental X-Ray for Identifying Proximal Caries. TELKOMNIKA (Telecommun. Comput. Electron. Control) 2017, 15, 702–708. [Google Scholar] [CrossRef]
  34. Leite, A.F.; Vasconcelos, K.F.; Willems, H.; Jacobs, R. Radiomics and machine learning in oral healthcare. PROTEOMICS—Clin. Appl. 2023, 14, 1900040. [Google Scholar]
  35. López-Janeiro, Á.; Cabañuz, C.; Blasco-Santana, L.; Ruiz-Bravo, E. A tree-based machine learning model to approach morphologic assessment of malignant salivary gland tumors. Ann. Diagn. Pathol. 2021, 56, 151869. [Google Scholar] [CrossRef] [PubMed]
  36. Rodrigues, J.A.; Krois, J.; Schwendicke, F. Demystifying artificial intelligence and deep learning in dentistry. Braz. Oral Res. 2021, 35. [Google Scholar] [CrossRef] [PubMed]
  37. Babu, A.; Onesimu, J.A.; Sagayam, K.M. Artificial Intelligence in dentistry: Concepts, Applications and Research Challenges. E3S Web Conf. 2021, 297, 01074. [Google Scholar]
  38. Subbotin, A. Applying Machine Learning in Fog Computing Environments for Panoramic Teeth Imaging. In Proceedings of the 2021 XXIV International Conference on Soft Computing and Measurements (SCM), St. Petersburg, Russia, 26–28 May 2021; pp. 237–239. [Google Scholar] [CrossRef]
  39. Patil, S.; Albogami, S.; Hosmani, J.; Mujoo, S.; Kamil, M.A.; Mansour, M.A.; Abdul, H.N.; Bhandi, S.; Ahmed, S.S.S.J. Artificial Intelligence in the Diagnosis of Oral Diseases: Applications and Pitfalls. Diagnostics 2022, 12, 1029. [Google Scholar] [CrossRef] [PubMed]
  40. De Angelis, F.; Pranno, N.; Franchina, A.; Di Carlo, S.; Brauner, E.; Ferri, A.; Pellegrino, G.; Grecchi, E.; Goker, F.; Stefanelli, L.V. Artificial Intelligence: A New Diagnostic Software in Dentistry: A Preliminary Performance Diagnostic Study. Int. J. Environ. Res. Public Health 2022, 19, 1728. [Google Scholar] [CrossRef]
  41. Rattan, D. Panoramic Dental Xray Dataset, Kaggle. 2021. Available online: https://www.kaggle.com/datasets/daverattan/dental-xrary-tfrecords (accessed on 2 June 2024).
  42. Pushkara, A. Teeth_Dataset, Kaggle. 2020. Available online: https://www.kaggle.com/datasets/pushkar34/teeth-dataset (accessed on 2 June 2024).
  43. Hassani, H.; Amiri Andi, P.; Ghodsi, A.; Norouzi, K.; Komendantova, N.; Unger, S. Shaping the future of smart dentistry: From Artificial Intelligence (AI) to Intelligence Augmentation (IA). IoT 2021, 2, 510–523. [Google Scholar] [CrossRef]
Figure 1. Types of dental diseases, (a) including tooth decay, periodontal disease, dental abscess, and oral cancer; (b) including dental erosion, gingival hyperplasia, dental cysts, and oral ulcers.
Figure 2. Summary of the procedure for identifying dental diseases.
Figure 3. Faster R-CNN structure.
Figure 4. Original ResNet-18 architecture.
Figure 5. Mixed model of Inception V4 + NASNetMobile.
Figure 6. Original YOLOv3 network architecture.
Figure 7. Summary of each category [5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40].
Table 1. Comparative analysis of papers published in 2010–2023 on dentistry, where N/A means not mentioned.
Refs. | Authors (Year) | Aim | Classifiers | Measurement | Dataset | Size | Preprocessing | Feature Extraction | Result | Strength | Weakness
[25] | Abdullah S. AL-Malaise AL-Ghamdi et al. (2022) | To classify dental X-ray images into three categories: cavities, fillings, and implants; the data were divided into training and validation sets | NASNet | Accuracy | Kaggle, Panoramic Dental Xray Dataset [41] | 116 | Data augmentation using scaling, rotation, translation, Gaussian blur, and Gaussian noise | NASNet, AlexNet, CNN | After data augmentation the dataset grew to 245 images and the accuracy of NASNet, AlexNet, and CNN increased; model accuracy was 96.51% with augmentation and 93.36% without. The authors attributed the approach's behavior to the restricted training sample | High accuracy | Focused on only three categories
[7] | Tran Manh Tuan, Hai V. Pham et al. (2017) | To create a dental diagnosis tool based on fuzzy rule-based systems and machine learning to assist dentists in making decisions from X-ray pictures and patient symptoms | Fuzzy rules | Accuracy | Hanoi Medical University | 56 | Entropy, edge value, intensity, local binary patterns (LBP), gradient, and patch-level features to characterize images (texture, density, etc.) | Input variables such as entropy, edge, intensity (EEI), LBP, RGB, gradient, and patch | The method (feature extraction, clustering, fuzzy rule creation, fuzzy inference) outperformed the FKNN algorithm in accuracy, reaching 90.29%; the article also suggested future research directions | FCM clustering aided the grouping of comparable features; more accurate than FKNN; rule-based | Dataset quality and quantity; labels such as "low" and "high" rather than percentages
[17] | Mircea Paul Muresan, Andrei Razvan Barbura, Sergiu Nedevschi (2020) | To detect dental issues in panoramic radiographs, acquire an accurate diagnosis, and categorize the condition from X-ray pictures | CNN, majority-vote technique | Accuracy, recall, precision, and F1-score | Dataset from three different clinics | About 1000 | Pictures were cropped to remove metadata, resized to 2048 × 1024 pixels, and each image was assigned to one of 14 problem types | CNN (ERFNet) | The proposed method classified dental issues well compared with two alternatives; accuracy of 89% | Covered 14 dental diseases; could effectively segment teeth and diagnose dental disorders | Accuracy could be further improved
[32] | Ainas A. ALbahbah, Hazem M. El-Bakry, Sameh Abd-Elgahany (2016) | To analyze radiological images to diagnose dental decay more effectively than earlier studies | Artificial neural network (ANN) with a three-layer structure and sigmoid activation | Error rate; precision (level of measurement consistency); receiver operating characteristic (ROC); cross-entropy (CE) | N/A | N/A | Image resizing; segmentation of the upper and lower jaws and teeth; cropping the region of interest; conversion to RGB then thresholding to binary images; enhancement | Histogram of oriented gradients (HOG) | Results assessed by error rate, accuracy, ROC curves, and cross-entropy showed the neural network differentiated well between healthy and decayed teeth | Outperformed rule-based computer-assisted systems; handled complex problems and variation in patients' teeth | No information about the dataset; exact numerical results not reported; the study is old
[27] | Xiaojie Zhou, Guoxia Yu, Qiyue Yin, Yan Liu, Zhiling Zhang, Jie Sun (2022) | To advance and enhance the diagnosis of tooth decay in panoramic radiographs using CNNs, especially in children | CNN based on ResNet | Accuracy, precision, AUC (area under the curve), and recall | Beijing Children's Hospital, Capital Medical University, and the National Center for Children's Health | 210 | Individual teeth were extracted from panoramic radiographs using dedicated tools; tooth sizes and shapes varied as inputs; teeth absent from the image were removed from the database; data split into training, validation, and testing | CNN model | The context-aware CNN model outperformed a typical CNN baseline by 5-7 percentage points in accuracy, precision, recall, F1-score, and AUC; it reduced diagnostic time, although doctors remained superior in specific instances | Reduced diagnostic time | N/A
[9] | Gil Jader, Jefferson Fontinele, Marco et al. (2018) | To employ deep learning (Mask R-CNN) to segment teeth in panoramic X-ray pictures | Mask R-CNN, R-CNN, and FCN | Recall, accuracy, precision, F1-score, and specificity | MSCOCO dataset | 1500 | N/A | Mask R-CNN learned features from the data | The Mask R-CNN-based segmentation system produced promising, highly accurate results, outperforming unsupervised segmentation approaches | Demonstrated the power of Mask R-CNN for image segmentation; diverse, high-quality images | No publication date given for the article; ambiguity in the data preprocessing procedure
[16] | Minyoung Chung, Jusang Lee, Sanguk Park et al. (2020) | To create a CAD system for dental panoramic X-ray pictures to help professionals locate and identify individual teeth | CNN-based: ResNet, DLA, and stacked hourglass networks | Average precision (AP), AP50, AP75, and mean intersection over union (mIoU) for tooth-detection accuracy | N/A | 818 | CLAHE for standardized contrast | Convolutional neural networks (CNNs) | The approach achieved accurate tooth detection and could be used in clinical situations | Center-shift multi-task training improved translation accuracy; the system worked efficiently without additional filtering methods | Accuracy percentage not reported
[30] | Pedro H. M. Lira and Gilson A. Giraldi (2010) | Creating an automated segmentation approach for extracting teeth from dental X-ray images | Not explicitly stated; likely PCA and image-processing techniques | Crown-body (CB) and root (R) lengths, and the CB/R ratio | N/A | N/A | Quadtree decomposition, low-pass filtering, Otsu's thresholding, and XOR operations | Shape models, PCA, and feature-vector computation for tooth-border identification | The experimental findings were discussed; the procedure removed interference from the X-ray images of teeth | Eliminated the need for jaw separation during tooth segmentation; included a mechanism for identifying tooth borders | Overlapping teeth in X-ray images; accuracy and dataset not reported
[33] | Jufriadif Na`am, Johan Harlan, Sarifuddin Madenda, Eri Prasetyo Wibowo (2017) | To improve image-processing quality for detecting proximal caries in panoramic dental X-ray images | N/A | Identifying proximal caries and their severity | N/A | N/A | Removing undesired regions from X-ray pictures | Multiple morphological gradient (mMG) | mMG clarified tooth edges so the location of caries could be determined | The technology improved the accuracy of identifying caries that are difficult to spot by eye alone | Missing much information: performance relative to the caries-detection method, the classifiers used, and the image source
[38] | Alexey N. Subbotin (2021) | To use panoramic dental imaging, machine learning, fog computing environments, and cloud technologies to enhance diagnostic accuracy and speed up dentists' work | Machine learning is mentioned for diagnosing tooth damage, but the classifiers used are not specified | Reduction in follow-up referrals; increased processing speed for X-ray pictures | N/A | N/A | Panoramic dental image acquisition; use of fog computing environments | N/A | Patients sent for follow-up therapy decreased by 7.24%, while X-ray image processing speed increased by 13.93% | The system's concept is important: it tried to reduce diagnostic time using fog computing, machine learning, and cloud technologies | Lacks specifics on the machine learning classifiers, dataset size, and feature-extraction methods
[8] | Jae-Hong Lee et al. (2018) | To determine the usefulness of deep CNN algorithms for detecting and diagnosing dental caries on periapical radiographs | Pre-trained GoogLeNet Inception v3 CNN; datasets trained via transfer learning | Accuracy | Dental hospital PACS system (Infinitt PACS, Infinitt Co., Seoul, Korea) | 3000 | Resized to 299 × 299 pixels and converted to JPEG; all maxillary teeth images were vertically flipped into mandibular form | 22 deep layers; nine Inception modules, including an auxiliary classifier, two fully connected layers, and softmax | Diagnostic accuracy was 88.0% for molars, 89.0% for premolars, and 82.0% for combined premolars and molars | Deep CNN algorithm; robust evaluation metrics | The dataset included permanent teeth only; image resolution; exclusion of clinical parameters
[21] | Chisako Muramatsu et al. (2021) | To create a computerized method for detecting and classifying teeth in dental panoramic radiographs, enabling automatic creation of structured dental charts; it could also serve as a preprocessing step for computerized image analysis of dental problems | CNN-based method | Sensitivity and false positives | Collected at Asahi University Hospital | 100 | The segmentation of the lower mandible contour identified the approximate location of the teeth; restricting input image size excluded the extra dental region and reduced false positive detections | Features were extracted from each input picture using convolution and residual layers before merging | Tooth-detection sensitivity was 96.4% with 0.5 false positives per case; classification accuracy was 93.2% for tooth types and 98.0% for conditions | The technology could automatically compile dental charts for forensic identification and pre-screening for dental diseases | (1) Small dataset; (2) third molars were not included in the study
[18] | Sonavane, A. et al. (2021) | Identifying cavities | CNN-based method | Accuracy | Kaggle dataset [42] | 74 | JPEG images; ImageDataGenerator from keras.preprocessing.image in Python; validation used 20% of the training images with random horizontal flips and a random zoom range of 0.2 | N/A | The maximum accuracy was .43%; increasing the dataset size improved model accuracy | Introduced a promising mobile application for capturing dental images and rapid assessment | Small dataset
[14] | Yaqi Wang et al. (2020) | To develop an automatic dental X-ray detection method using adaptive histogram equalization and a hybrid multi-convolution neural network (CNN) | Hybrid multi-CNN model classifying six different locations of dental slices | Accuracy, specificity, and area under the curve (AUC) | Dataset source not mentioned | 2491 | Adaptive histogram equalization, median filtering, and image sharpening to improve detection precision | Two networks, NASNetMobile and Inception V4, generated eigenvectors for each image | Test-set accuracy was over 90% with an AUC of 0.97; the algorithm's performance was also compared with annotations from four dentists, demonstrating how well it located teeth | Combined preprocessing and deep learning to increase detection accuracy; addressed image rotation and tooth-position intersection; thorough evaluation criteria (accuracy, specificity, AUC) | Further information on restrictions or difficulties encountered would help; architectural specifics (number of layers, activation functions) were not published, hindering replication and further study
[22] | Andac Imak et al. (2022) | To present a novel approach for automatic detection of dental caries from periapical pictures, addressing the limits of dentists' manual diagnosis | MI-DCNNE | Accuracy, sensitivity, specificity, precision, and F1-score, providing a comprehensive assessment of model performance | Private oral and dental health clinics using periapical radiography devices | 340 | Image-processing techniques to enhance the raw periapical pictures, e.g., a sharpening filter and intensity adjustment to increase contrast and highlight problem regions | CNN | The proposed model was highly successful in diagnosing dental caries, with a reported accuracy of 99.13%, showing that MI-DCNNE can contribute effectively to caries classification | (1) Good accuracy; (2) autonomous feature extraction reduced the need for manual feature engineering; (3) a score-based ensemble technique improved robustness | (1) The deep CNN architecture is not detailed; (2) the dataset may have characteristics that limit generalizability to other demographics or clinical contexts
[12] | V. Geetha and K. S. Aprameya (2019) | Machine learning diagnosis of dental caries in radiographs | KNN | Accuracy, precision, false-positive rate, receiver operating characteristic (ROC) | SJM Dental College, India, using Gendex X-ray equipment with a Sirona RVG sensor | 49 | X-ray images converted to BMP format with the MATLAB conversion tool, resized to 256 × 256 (class double), and enhanced with a Laplacian filter | GLCM technique; extracted features included contrast, correlation, energy, homogeneity, mean, and entropy | 98.5% accuracy, 98.5% precision, a 4.7% false-positive rate, and a ROC curve area of 0.953 under 10-fold cross-validation; results validated by two-way ANOVA at the 5% significance level | Easy to implement, rapid computation, and easy to operate | Broader validation with a larger, more diverse dataset would enhance reliability and applicability
[24] | Yassir Edrees Almalki et al. (2022) | To use the YOLOv3 deep learning model for automated detection of dental problems | YOLOv3 | Mean average precision (mAP), F1-score, precision, sensitivity, and intersection over union (IoU) | Some OPGs taken with a DSLR camera, others obtained from clinics | 1200 | Rotation range, zoom range, shear range, horizontal flip | CNN-based | The trained YOLOv3 model achieved an accuracy of 99.33% on test images | A novel deep learning approach to detecting dental problems | Small sample covering only four types of illnesses, which does not represent the entire population
[19] | Yu-Ping Huang and Shyh-Yuan Lee (2022) | Two-phase research evaluating several caries-detection approaches and emphasizing the significance of high-quality data; initially, five experienced doctors compared caries detection based on OCT and apical radiography | CNN | Accuracy, specificity, sensitivity, PPV, and NPV | National Yang-Ming University | 100 | Self-developed OCT, periapical films, and micro-CT for dental assessment | Convolutions with small kernels extracting local features such as edges, impulses, and noise | Accuracy 95.21%, sensitivity 98.85%, specificity 89.83%, PPV 93.48%, NPV 98.15% | Despite its limitations, the study could serve as a platform for additional research on related topics | The main shortcoming was the manual verification process
[34] | André Ferreira Leite, Karla de Faria Vasconcelos, Holger Willems, Reinhilde Jacobs (2020) | To present an overview of artificial intelligence (AI) in dental and maxillofacial radiology (DMFR) and address possible uses, difficulties, and future views | Convolutional neural networks (CNNs) | Accuracy, specificity, sensitivity, and area under the curve (AUC) | Various sources, including public datasets and private collections | Varies, about 2400 | Data augmentation (cropping, adding noise, mirroring, etc.) to increase dataset size | Deep learning models, particularly CNNs, automatically extracted features from the input data | AI approaches, particularly deep learning, showed encouraging results across dental imaging tasks, in some cases matching expert-level dentists | A complete review of current AI in DMFR, highlighting both benefits and limitations, and emphasizing AI's transformative potential in disease diagnosis, treatment planning, and prediction | The opacity of AI's mathematical processes might impede radiologists' interpretation of outcomes; large dental-image datasets and ground truths for validating AI outcomes are needed
[29] | Junhua Zhu, Zhi Chen, Jing Zhao et al. (2023) | To create an artificial intelligence (AI) framework for diagnosing various dental illnesses on panoramic radiographs (PRs) using deep convolutional neural networks | CNNs, specifically two models: BDU-Net and nnU-Net | Diagnosing multiple dental diseases on PRs with respect to sensitivity, specificity, AUC, and diagnostic time | Stomatology Hospital of Zhejiang Chinese Medical University | 1996 | Image resampling, image normalization, image spacing, and patch-size settings | CNNs: BDU-Net and nnU-Net | Sensitivity, specificity, and AUC were, respectively: impacted teeth, 0.964, 0.996, 0.960, and 0.980; full crowns, 0.953, 0.998, 0.951, and 0.975; residual roots, 0.871, 0.999, 0.870, and 0.935; missing teeth, 0.885, 0.994, 0.879, and 0.939; caries, 0.554, 0.990, 0.544, and 0.772 | Compared the AI framework's efficacy against dentists with varied levels of experience; high specificity across disorders | N/A
[12] | Dmitry V. Tuzoff, Lyudmila N. Tuzova, Michael M. Bornstein et al. (2019) | To analyze dental radiographs and provide a CNN-based solution to recognize and number teeth in panoramic radiographs | Faster R-CNN for detection and VGG-16 CNN for teeth numbering | Sensitivity and precision for teeth detection; sensitivity and specificity for numbering | Panoramic radiographs of adults | 1574 | N/A | CNN-based feature extraction from the radiographs | Tooth detection: sensitivity 0.9941, precision 0.9945; tooth numbering: sensitivity 0.9800, specificity 0.9994 | A unique technique for a practical dentistry application using state-of-the-art CNN architectures, achieving expert-level results | The preprocessing procedures were not explained, which might be critical for understanding the whole technique and reproducing the results
[23] | J. Kühnisch, O. Meyer et al. (2022) | To create a deep learning system identifying caries from intraoral pictures using CNNs, and to compare its diagnostic performance with expert standards | Convolutional neural networks (CNNs) | Sensitivity, specificity, and area under the ROC curve (AUC) | Anonymized photographs of permanent teeth | 2417 | Image augmentation, transfer learning, and normalization to compensate for under- and overexposure | MobileNetV2 architecture with inverted residual blocks | The CNN correctly identified cavities in 92.5% of the test photos | High accuracy, suggesting the promise of AI in dental diagnostics | The authors acknowledged the strategy could be further improved, implying limits in the methodology or the AI model used
[31] | Xiaoqiu Xie, Lin Wang, Aming Wang (2010) | To construct a decision-making expert system for orthodontic treatment using artificial neural networks (ANNs) to decide whether extractions are necessary for patients aged 11-15 | ANNs with a back-propagation (BP) model | Anterior teeth uncovered by incompetent lips, IMPA (L1-MP), and FMA (FH-MP); the constructed ANN's accuracy was 80% | Patients aged 11-15 who visited the department from 1999 to 2005 | 200 | Encoding and normalization of quantification and non-quantification indices | 23 indices, including cast measurements, cephalometry, and growth | The ANN was 80% accurate in identifying therapy for children aged 11-15 with malocclusion | A comprehensive ANN technique to anticipate extraction needs, potentially a beneficial tool for orthodontists | Limited to a specific age group (11-15); may not generalize to other age groups or orthodontic problems
[11] | Hu Chen, Kailai Zhang, Peijun Lyu, Hong Li, Ludan Zhang, Ji Wu, Chin-Hui Lee (2019) | To create a deep learning system for automatically detecting and numbering teeth in dental X-ray images, increasing the efficiency and accuracy of dental diagnostics | A deep convolutional neural network (CNN), specifically the Faster R-CNN model | Precision, recall, and intersection over union (IoU) | Dental periapical films from Peking University School and Hospital of Stomatology | 1250 | Input photos scaled to preserve the original aspect ratio, with a minor dimension of 300 pixels | Faster R-CNN for object detection and feature extraction from dental X-ray images | Precision and recall approached 90%, with an average IoU of 91% between predicted boxes and ground truth; the machine performed on par with junior dentists | Excellent precision and recall, comparable to a junior dentist's performance, demonstrating practical viability | Testing on a larger dataset and more real-world cases would help; reliance on a single model (Faster R-CNN) may limit future adaptation to newer architectures
[15] | Shankeeth Vinayahalingam, Tong Xi et al. (2019) | Creating an automated deep learning system to detect and segment the third molar (M3) and inferior alveolar nerve (IAN) on dental panoramic radiographs (OPGs) | CNN based on the U-Net architecture | Dice coefficient, sensitivity, and specificity | OPGs of patients from the Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre | 81 OPGs | Image acquisition, data preprocessing, size standardization, and contrast enhancement of the OPGs | N/A | Mean Dice coefficients for M3s and the IAN were 0.947 ± 0.033 and 0.847 ± 0.099, respectively | Deep learning effectively addressed a key clinical obstacle, with encouraging results in automated identification and segmentation of third molars and the IAN | Feature extraction was not detailed; lack of contrast on OPGs and heterogeneity in mandibular canal shape affected segmentation performance
[26] | Kuo Feng Hung, Qi Yong H. Ai, Ann D. King, Michael M. Bornstein, Lun M. Wong, Yiu Yan Leung (2022) | To develop and evaluate a CNN method for automated detection and segmentation of MT and MRCs in the maxillary sinus on low-dose and full-dose cone-beam CT (CBCT) | Detection classifier, segmentation classifier | Areas under the curves (AUCs) and Dice similarity coefficient (DSC) | Maxillary sinuses | 890 | Image normalization, noise reduction, image registration, data augmentation | CNN algorithm built on V-Net and support vector regression | The CNN algorithm was very accurate in identifying and segmenting MT and MRCs on both low-dose and full-dose CBCT, with no considerable performance difference between the two imaging techniques | High accuracy; comparable performance across doses; segmentation capability | Manual segmentation; limited sample size; lack of external validation
[35] | Álvaro López-Janeiro, Clara Cabañuz, Luis Blasco-Santana, Elena Ruiz-Bravo (2021) | To develop a machine learning algorithm for diagnosing malignant salivary gland tumors, improving diagnostic performance in this challenging field of pathology | Recursive partitioning algorithm | Morphological variables | Commonly encountered malignant salivary gland tumors | 115 cases | Data cleaning; feature scaling; feature selection or dimensionality reduction; data splitting; cross-validation | Assessing and quantifying specific morphological characteristics associated with malignant salivary gland tumors | The algorithm successfully guided the morphological diagnostic approach, achieved high classification accuracy, identified relevant morphological variables, and showed consistent misclassification patterns; it shows promise as a diagnostic tool for these challenging tumors | Improved diagnostic performance; consistent misclassification pattern; relevant morphological variables; inter-observer concordance | Limited sample size; lack of external validation; limited histologic types; interpretability of the classification tree
[36] | Jonas Almeida Rodrigues, Joachim Krois, Falk Schwendicke (2021) | To develop AI applications that can assist dentists in tasks such as image classification and object detection | Neural networks (NNs), specifically multilayered "deep learning" NNs and CNNs for processing complex imagery data | Capturing statistical patterns and structures from data through machine learning, particularly neural networks | N/A | N/A | Data cleaning, feature scaling, feature selection, and data splitting | CNN | Generalizable, robust AI applications could benefit both clinicians and patients in the future, though no specific evidence or data supports this claim | Computer systems can perform complex tasks; powerful machine learning tools for imagery data; ability to extract valuable features for various applications | Many studies lack robustness and generalizability; limited interaction between dental and technical disciplines; users need understanding and critical evaluation of AI applications
[20] | Elisa Diniz de Lima, José Alberto Souza Paulino et al. (2021) | To evaluate three machine learning (ML) attribute-extraction methods (radiomic, semantic, and radiomic-semantic association) for temporomandibular disorder (TMD) detection using infrared thermography (IT), and to determine which ML classifier (KNN, SVM, or MLP) is more efficient for this objective | KNN, SVM, and MLP | Effectiveness of the three attribute-extraction methods on IT pictures for identifying TMD; data analyzed using Hopkins' statistic, Shapiro-Wilk, ANOVA, and Tukey tests | Patients | 78 | Patient selection; data acquisition; region-of-interest (ROI) selection; attribute-extraction methods | Radiomic attribute extraction; semantic feature extraction; radiomic-semantic association | Training and testing accuracy differed significantly for the radiomic-semantic association (p = 0.003); MLP differed from the other classifiers for the radiomic-semantic association (p = 0.004); the accuracy, precision, and sensitivity of the semantic and radiomic-semantic association differed significantly from radiomic characteristics (p = 0.008, 0.016, and 0.013, respectively) | Multimodal approach; real-world application; comparative analysis; statistical analysis | Limited sample size and clinic not mentioned; lack of generalizability; no information on preprocessing; no external validation
[15] | Wenzhe You, Aimin Hao, Shuai Li, Yong Wang, Bin Xia (2020) | To create and evaluate a deep learning (AI) model for detecting dental plaque on primary teeth, and to compare its diagnostic accuracy with the performance of an experienced pediatric dentist | A deep learning-based AI model built on a CNN framework | Mean intersection over union (MIoU) | Intraoral photos of primary teeth (training dataset, validation dataset, digital-camera photos, lower-resolution photos) | 886 | Image resizing; image normalization; data augmentation; labeling | The CNN automatically extracted relevant features from the input images | The model showed clinically acceptable performance in detecting dental plaque on primary teeth, highlighting its potential to improve juvenile oral health | Improved diagnostic accuracy; consistency; comparable performance | Limited information on the CNN architecture; limited sample size; lack of external validation; lack of comparative metrics; limited scope
[37] | Achsha Babu, J. Andrew Onesimu, K. Martin Sagayam (2021) | To present the current applications of artificial intelligence (AI) | N/A | N/A | N/A | N/A | N/A | N/A | N/A | Highlights the potential implications of AI in dentistry; overviews different AI techniques in dentistry; analyzes state-of-the-art literature with a comparative analysis; discusses research challenges and future directions | Lacks specific examples of AI applications in dentistry; limited details on methodology and findings; requires access to the complete paper for a comprehensive understanding
[28] Kubilay Muhammed Sunnetci, Sezer Ulukaya, and Ahmet Alkan (2022)
Objective: to develop a hybrid artificial-intelligence-based system for diagnosing periodontal bone loss in dental images.
Method: classifying AlexNet-based deep image features with coarse tree, weighted K-nearest neighbor (KNN), Gaussian Naive Bayes, RUSBoosted trees ensemble, and linear support vector machine (SVM) classifiers; classifying SqueezeNet-based deep image features with medium tree, Gaussian Naive Bayes, boosted trees ensemble, coarse KNN, and medium Gaussian SVM classifiers.
Metrics: accuracy 81.49%; error 18.51%; sensitivity 84.57%; specificity 79.14%; precision 75.68%; F1 score 79.88%.
Data: dental images labeled by an expert.
Sample size: 1432.
Feature extraction: deep learning architectures (AlexNet, SqueezeNet, and EfficientNetB5) for feature extraction and classification; AlexNet- and SqueezeNet-based features were classified.
Results: linear SVM performed best for AlexNet-based features, and medium Gaussian SVM achieved the best results for SqueezeNet-based features. The best classifier (linear SVM on AlexNet-based features) reached accuracy 81.49%, sensitivity 84.57%, specificity 79.14%, precision 75.68%, and F1 score 79.88%.
Advantages: supports accurate and early diagnosis of periodontal bone loss; deep learning architectures were employed for feature extraction and classification; multiple classifiers were evaluated for each architecture, enhancing the robustness of the study; user-friendly.
Limitations: the retrieved deep image features and the classifier for EfficientNetB5 were not specified; the evaluation methodology and possible constraints were not specified; the incomplete information limits a comprehensive assessment.
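The two-stage pattern in [28] — a pretrained CNN as a fixed feature extractor, followed by a classical classifier such as an SVM or KNN — can be sketched as follows. This is a minimal NumPy illustration, not the study's implementation: random vectors stand in for AlexNet activations, the class labels (healthy vs. bone loss) and all sizes are invented, and a plain k-nearest-neighbor vote stands in for the toolbox classifiers the authors compared.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for deep image features (e.g., AlexNet activations):
# 120 "images", 256-dimensional vectors, two hypothetical classes
# (0 = healthy, 1 = periodontal bone loss).
n, d = 120, 256
labels = rng.integers(0, 2, size=n)
features = rng.normal(size=(n, d))
features[labels == 1, :16] += 4.0   # inject a separable signal

# Simple train/test split
X_tr, y_tr = features[:90], labels[:90]
X_te, y_te = features[90:], labels[90:]

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-nearest-neighbor majority vote on Euclidean distance."""
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    votes = y_train[nearest]
    return (votes.mean(axis=1) >= 0.5).astype(int)

pred = knn_predict(X_tr, y_tr, X_te)
accuracy = (pred == y_te).mean()
print(f"accuracy: {accuracy:.2f}")
```

The appeal of this design, and a plausible reason the authors evaluated many classical classifiers per architecture, is that the expensive deep features are computed once and can then be reused across cheap classifier comparisons.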
[39] Patil et al. (2022)
Objective: to explore the applications and pitfalls of AI in diagnosing oral diseases using clinical data and diagnostic images.
Method: various AI models (e.g., CNNs).
Metrics: diagnostic accuracy, reduction in costs, and minimization of human errors.
Data: multiple sources of clinical data and diagnostic images.
Results: AI shows immense potential in improving diagnostic accuracy, reducing costs, and minimizing human errors.
Advantages: comprehensive review highlighting various applications of AI in dentistry.
Limitations: needs larger datasets and better integration of AI into routine clinical practice.
[40] De Angelis et al. (2022)
Objective: to evaluate the performance of AI diagnostic software in analyzing panoramic X-rays.
Method: Apox.
Metrics: sensitivity, specificity, and diagnostic accuracy.
Data: panoramic X-rays.
Results: high sensitivity (0.89) and specificity (0.98) in identifying dental structures.
Advantages: reliable in dental diagnostics.
Limitations: challenges in detecting radiolucent materials such as fillings and residual roots.
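Sensitivity and specificity of the kind reported in [40] are computed directly from a binary confusion matrix. The sketch below is illustrative only; the counts are hypothetical, chosen so the result reproduces the reported 0.89 sensitivity and 0.98 specificity rather than taken from the study.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts on a panoramic-radiograph test set:
# 89 of 100 diseased structures flagged, 98 of 100 healthy ones cleared.
sens, spec = sensitivity_specificity(tp=89, fn=11, tn=98, fp=2)
print(sens, spec)  # 0.89 0.98
```

High specificity alongside slightly lower sensitivity, as here, means the software rarely flags healthy structures but misses about one in ten diseased ones.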

Share and Cite

MDPI and ACS Style

Musleh, D.; Almossaeed, H.; Balhareth, F.; Alqahtani, G.; Alobaidan, N.; Altalag, J.; Aldossary, M.I. Advancing Dental Diagnostics: A Review of Artificial Intelligence Applications and Challenges in Dentistry. Big Data Cogn. Comput. 2024, 8, 66. https://doi.org/10.3390/bdcc8060066

