Abstract
Classifying large dental radiographic datasets enables efficient data management and retrieval, facilitating quick access to specific types of radiographs for clinical or research purposes. It also supports advanced analytics, research, and the development of Artificial Intelligence (AI) tools. This study aimed to develop an automated workflow to improve the efficiency of dental radiograph classification. The workflow covers the entire process, from retrieving Digital Imaging and Communications in Medicine (DICOM) files to converting them into Joint Photographic Experts Group (JPEG) format and classifying them using Convolutional Neural Networks (CNNs) on a large dataset.
This cross-sectional machine learning study was conducted on 48,329 dental radiographs to develop an automated CNN-based classification workflow. The workflow involved retrieving the DICOM files, standardizing them to a uniform size, and converting them to JPEG format using the Pydicom library. Image preprocessing, including normalization, prepared the images for machine learning analysis. Several models, including ResNet-50, AlexNet, and a custom CNN, were trained, validated, and tested on distinct datasets.
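To make the conversion step concrete, a minimal sketch of the DICOM-to-JPEG step is given below, assuming Pydicom together with NumPy and Pillow; the 224x224 target size and the directory names are illustrative assumptions and are not details reported in the study.

    # Minimal sketch: read a DICOM file, normalize, resize, and save as JPEG.
    # Target size and paths are illustrative, not the study's settings.
    from pathlib import Path

    import numpy as np
    import pydicom
    from PIL import Image

    def dicom_to_jpeg(dicom_path: Path, out_dir: Path, size=(224, 224)) -> Path:
        """Convert one DICOM file to an 8-bit grayscale JPEG of uniform size."""
        ds = pydicom.dcmread(dicom_path)
        pixels = ds.pixel_array.astype(np.float32)

        # Min-max normalization to the 0-255 range expected by JPEG.
        pixels -= pixels.min()
        if pixels.max() > 0:
            pixels /= pixels.max()
        img = Image.fromarray((pixels * 255).astype(np.uint8)).convert("L")

        # Standardize all images to a uniform size before model input.
        img = img.resize(size)

        out_path = out_dir / (dicom_path.stem + ".jpg")
        img.save(out_path, format="JPEG")
        return out_path

    if __name__ == "__main__":
        out_dir = Path("jpeg_out")
        out_dir.mkdir(exist_ok=True)
        for dcm in Path("dicom_in").glob("*.dcm"):
            dicom_to_jpeg(dcm, out_dir)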
These models were then deployed to classify the full dataset of 48,329 images. AlexNet demonstrated the highest performance, with a 95.98% detection rate and no errors; ResNet-50 achieved 92.3% accuracy with 194 errors; and the custom CNN model showed a 77.25% detection rate with 1,623 errors.
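For illustration, the deployment step might resemble the sketch below, which loads a fine-tuned AlexNet with PyTorch/torchvision and classifies the converted JPEG images; the checkpoint name, class labels, class count, and folder layout are hypothetical, as the abstract does not specify the study's configuration.

    # Minimal deployment sketch: load a fine-tuned AlexNet and classify JPEGs.
    # Checkpoint name, class labels, and folders are hypothetical placeholders.
    from pathlib import Path

    import torch
    from torch import nn
    from torchvision import models, transforms
    from PIL import Image

    NUM_CLASSES = 4  # assumed number of radiograph categories
    CLASS_NAMES = ["bitewing", "panoramic", "periapical", "other"]  # hypothetical

    transform = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),  # AlexNet expects 3 channels
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Rebuild the architecture and load previously trained weights.
    model = models.alexnet(weights=None)
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)
    model.load_state_dict(torch.load("alexnet_dental.pt", map_location="cpu"))
    model.eval()

    results = {}
    with torch.no_grad():
        for jpg in Path("jpeg_out").glob("*.jpg"):
            x = transform(Image.open(jpg)).unsqueeze(0)  # add batch dimension
            results[jpg.name] = CLASS_NAMES[model(x).argmax(dim=1).item()]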
The study established an effective automated workflow for dental radiograph classification, demonstrating that CNN models significantly improve classification accuracy and efficiency.
Keywords: computer-assisted, convolutional neural networks, machine learning, radiographic image interpretation, radiography, dental.
Journal: Cureus
