Automated description of the mandible shape by deep learning
Publisher
Springer
Abstract
Purpose: The shape of the mandible has been analyzed in a variety of fields: to diagnose conditions like osteoporosis or osteomyelitis, in forensics to estimate biological information such as age, gender, and race, or in orthognathic surgery. Although the methods employed produce encouraging results, most rely on dry bone analyses or complex imaging techniques that ultimately hamper sample collection and, as a consequence, the development of large-scale studies. Thus, we proposed an objective, repeatable, and fully automatic approach to provide a quantitative description of the mandible in orthopantomographies (OPGs).
Methods: We proposed the use of a deep convolutional neural network (CNN) to localize a set of landmarks of the mandible contour automatically from OPGs. Furthermore, we detailed four different descriptors of the mandible shape to be used for a variety of purposes. These include a set of linear distances and angles calculated from eight anatomical landmarks of the mandible, the centroid size, the shape variations from the mean shape, and a group of shape parameters extracted with a point distribution model.
Results: The fully automatic digitization of the mandible contour was very accurate, with a mean point-to-curve error of 0.21 mm and a standard deviation comparable to that of a trained expert. The combination of the CNN and the four shape descriptors was validated in the well-known problems of forensic sex and age estimation, achieving an accuracy of 87.8% and a mean absolute error of 1.57 years, respectively.
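Two of the shape descriptors named in the abstract, the centroid size and the deviation from a mean shape, have standard definitions in geometric morphometrics and can be sketched briefly. The landmark coordinates and the reference shape below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Eight hypothetical 2-D mandible landmarks (x, y), for illustration only.
landmarks = np.array([
    [12.0, 30.0], [25.0, 18.0], [40.0, 12.0], [60.0, 10.0],
    [80.0, 12.0], [95.0, 18.0], [108.0, 30.0], [60.0, 45.0],
])

# Centroid size: square root of the summed squared distances
# of all landmarks to their centroid.
centroid = landmarks.mean(axis=0)
centroid_size = np.sqrt(((landmarks - centroid) ** 2).sum())

# Deviation from a (here: fabricated) mean shape, after removing
# location by centring both landmark configurations.
mean_shape = landmarks + np.random.default_rng(0).normal(0.0, 1.0, landmarks.shape)
deviation = np.linalg.norm(
    (landmarks - centroid) - (mean_shape - mean_shape.mean(axis=0))
)
print(centroid_size, deviation)
```

In practice the mean shape would come from a Procrustes alignment of the whole sample, and the point distribution model would be fitted on the aligned configurations; this snippet only shows the size and deviation measures themselves.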
Bibliographic citation
International Journal of Computer Assisted Radiology and Surgery 16, 2215–2224 (2021). https://doi.org/10.1007/s11548-021-02474-2
Publisher version
https://doi.org/10.1007/s11548-021-02474-2
Sponsors
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work has received financial support from the Consellería de Cultura, Educación e Ordenación Universitaria (accreditation 2019-2022 ED431G-2019/04 and Group with Growth Potential ED431B 2020-2022 GPC2020/27) and the European Regional Development Fund (ERDF), which acknowledges the CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System.
Rights
© The Author(s) 2021. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/
Attribution 4.0 International