MORPHIX: Resolving sample identification bias in morphometrics analysis with a supervised machine learning package


Abstract

Evolutionary biologists, primarily anatomists and ontogenists, employ modern geometric morphometrics to quantitatively analyse physical forms (e.g., skull morphology) and to explore relationships, variations, and differences between samples and taxa using landmark coordinates. The standard approach comprises two steps: Generalised Procrustes Analysis (GPA) followed by Principal Component Analysis (PCA). PCA projects the superimposed data produced by GPA onto a set of uncorrelated variables, which can be visualised on scatterplots and used to draw phenetic, evolutionary, and ontogenetic conclusions. Recently, the use of PCA in genetic studies has been challenged. Given PCA's central role in morphometrics, we sought to evaluate the standard approach and the claims based on PCA outcomes. To test PCA's accuracy, robustness, and reproducibility on benchmark data of the crania of five papionin genera, we developed MORPHIX, a Python package containing the tools necessary for processing superimposed landmark data with classifier and outlier detection methods, whose outcomes can be further visualised with various plots. We also discuss the case of Homo Nesher Ramla, an archaic human with a disputed taxonomy. We found that PCA outcomes are artefacts of the input data and are neither reliable, robust, nor reproducible, contrary to what practitioners in the field may assume, and that supervised machine learning classifiers are more accurate both for classification and for detecting new taxa. Our findings raise concerns about PCA-based findings in an estimated 18,000 to 32,900 studies. Our work can be used to evaluate prior and novel claims concerning the origins and relatedness of inter- and intra-species samples and to improve phylogenetic and taxonomic reconstructions.
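
The two-step pipeline described above can be sketched in a few lines of Python. The following is a minimal illustration using numpy and scikit-learn rather than MORPHIX's own API; the GPA loop is simplified (it does not, for instance, exclude reflections), and the landmark array is random stand-in data.

```python
import numpy as np
from sklearn.decomposition import PCA

def center_and_scale(x):
    """Remove translation and scale (unit centroid size)."""
    x = x - x.mean(axis=0)
    return x / np.linalg.norm(x)

def rotate_onto(x, ref):
    """Optimal least-squares rotation of x onto ref (may include reflection)."""
    u, _, vt = np.linalg.svd(x.T @ ref)
    return x @ (u @ vt)

def gpa(shapes, n_iter=10):
    """Simplified Generalised Procrustes Analysis on (n, landmarks, dim) data."""
    shapes = np.array([center_and_scale(s) for s in shapes])
    mean = shapes[0]
    for _ in range(n_iter):
        shapes = np.array([rotate_onto(s, mean) for s in shapes])
        mean = center_and_scale(shapes.mean(axis=0))
    return shapes

# Illustrative data: 100 specimens, 30 landmarks in 3D.
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(100, 30, 3))

aligned = gpa(landmarks)
flat = aligned.reshape(len(aligned), -1)   # one row per specimen

pca = PCA(n_components=2)
scores = pca.fit_transform(flat)           # PC scores for a scatterplot
print("explained variance ratios:", pca.explained_variance_ratio_)
```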

Article activity feed

  1. eLife assessment

    The authors present a critique of current usage of principal component analysis in geometric morphometrics, making a convincing case with benchmark data that standard techniques perform poorly. The work is an important contribution to the field and will hopefully lead to a reassessment of the methodology most scientists in morphometrics currently use. The authors also present a new Python package that uses machine learning to provide better-supported results than principal component analysis. While the package is in the title of the manuscript, it is not its main focus.

  2. Reviewer #1 (Public Review):

    Mohseni and Elhaik's article offers a critical evaluation of Geometric Morphometrics (GM), a common tool in physical anthropology for studying morphological differences and making phylogenetic inferences. Although I am not a geneticist or an expert on PCA theory, I read their article with great interest, since the problem of morphology-based classification is at the core of paleoanthropology.

    The authors developed a Python package for processing superimposed landmark data with classifier and outlier detection methods, to evaluate the adequacy of the standard approach to shape analysis via modern GM. They call into question the accuracy, robustness, and reproducibility of GM and demonstrate how PCA introduces statistical artefacts specific to the data, thus challenging its scientific rigor. The authors demonstrate the superiority of machine learning methods in classification and outlier detection tasks. The paper is well written and provides strong evidence in support of the authors' argument. Thus, in my opinion, it constitutes a major contribution to the field of physical anthropology, as it provides a critical and necessary evaluation of what has become a basic tool for studying morphology, and of the assumptions allowing its application to phylogenetic inferences. Again, I am not an expert in these statistical methods, nor a geneticist, but the authors' contribution is of substantial relevance to our field (physical anthropology). The examples of the NR fossils and HLD 6 are cases in point, in line with other notable examples of critical assessment of phylogenetic inferences made on the basis of PCA results of GM analysis. See, for example, Lordkipanidze et al.'s (2013) GM analyses of the Dmanisi fossils, suggesting that the five crania represent a single regional variant of Homo erectus, and Schwartz et al.'s (2014) comment on their findings, claiming that the dental, mandibular, and cranial morphology of these fossils suggests taxic diversity. Schwartz et al. (2014) ask, "Why did the GMA of 78 landmarks not capture the visually obvious differences between the Dmanisi crania and specimens commonly subsumed H. erectus? ... one wonders how phylogenetically reliable a method can be that does not reflect even easily visible gross morphological differences" (p. 360).

    As an alternative to the PCA step in GM, the authors tested eight leading supervised learning classifiers and outlier detection methods on three-dimensional datasets. By analysing a database comprising landmarks of six known species belonging to the Old World monkey tribe Papionini and using PCA for classification, the authors demonstrated that PCA clustering is inconsistent with the taxonomy of the investigated species and hence with the reconstruction of their phylogeny. The authors also demonstrated that high explained variance should not be used as an estimate of high accuracy (reliability); a sketch of this distinction follows below. Then, the authors altered the dataset in several ways to simulate the characteristic nature of paleontological data.
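
    A hedged sketch of this kind of comparison, using scikit-learn and synthetic stand-in data (the classifiers shown are common choices, not necessarily the eight the authors tested): the explained variance of the leading PCs is a property of the input data, whereas classification accuracy must be estimated separately, e.g., by cross-validation.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    # Stand-in for superimposed landmark data: six synthetic "taxa",
    # 40 specimens each, 90 flattened coordinates per specimen.
    rng = np.random.default_rng(1)
    centres = rng.normal(scale=2.0, size=(6, 90))
    X = np.vstack([c + rng.normal(size=(40, 90)) for c in centres])
    y = np.repeat(np.arange(6), 40)

    # Explained variance is a property of the input data, not an accuracy measure.
    pca = PCA(n_components=2).fit(X)
    print("PC1+PC2 explained variance:", round(pca.explained_variance_ratio_.sum(), 3))

    # Cross-validated accuracy of a few common supervised classifiers.
    for name, clf in {
        "LDA": LinearDiscriminantAnalysis(),
        "SVM": SVC(),
        "Random forest": RandomForestClassifier(random_state=0),
        "kNN": KNeighborsClassifier(),
    }.items():
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f}")
    ```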

    The authors excluded taxa from the database to study how PCA and alternative classifiers are affected by partial sampling, and the results presented in Figures 4 and 5, among others, are quite remarkable in showing the deviations from the benchmark data. These results expose the perils of applying PCA and GM to the interpretation of morphological data. Furthermore, they provide evidence showing that the alternative classifiers are superior to PCA and less susceptible to experimenter intervention. Similar results, i.e., inconsistencies in the PC plots, were obtained when examining the effect of removing specimens from the dataset and in the interesting test of removing landmarks to simulate partial morphological data, as is often the case with fossils. To test the combined effect of these data alterations, the authors combined the removal of taxa, specific samples, and landmarks from the dataset. In this case as well, the PCA results deviate from the benchmark data; however, the ML classifiers could not remedy the situation. The authors discuss how these inconsistencies may lead to different interpretations of the data and, in turn, different phylogenetic conclusions. Lastly, the authors simulated the situation of a specimen of unknown taxonomy using outlier detection methods, demonstrating the ability of the Local Outlier Factor (LOF) to identify a novelty in the morphospace (see the sketch below).
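
    For illustration, a minimal novelty-detection sketch with scikit-learn's LocalOutlierFactor, using synthetic stand-in data rather than the authors' landmark set:

    ```python
    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    # Stand-in data: 200 specimens of known taxa (aligned, flattened landmarks)
    # and one specimen lying far outside the known morphospace.
    rng = np.random.default_rng(2)
    known = rng.normal(size=(200, 60))
    unknown = rng.normal(loc=4.0, size=(1, 60))

    # novelty=True fits LOF on the known samples only, so new specimens
    # can then be scored against the learned morphospace.
    lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(known)
    print(lof.predict(unknown))            # -1 flags a novelty, +1 an inlier
    print(lof.decision_function(unknown))  # negative values indicate outliers
    ```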

    References
    Bookstein FL. 1991. Morphometric Tools for Landmark Data: Geometry and Biology [the "Orange Book"]. Cambridge; New York: Cambridge University Press.
    Cooke SB, Terhune CE. 2015. Form, function, and geometric morphometrics. The Anatomical Record 298:5-28.
    Lordkipanidze D, et al. 2013. A complete skull from Dmanisi, Georgia, and the evolutionary biology of early Homo. Science 342:326-331.
    Schwartz JH, Tattersall I, Chi Z. 2014. Comment on "A complete skull from Dmanisi, Georgia, and the evolutionary biology of Early Homo". Science 344(6182):360-a.

  3. Reviewer #2 (Public Review):

    I completely agree with the basic thrust of this study. Yes, of course, machine learning is FAR better than any variant of PCA for the paleosciences. I agree with the authors' critique early on that this point is not new per se - it is familiar to most of the founders of the field of GMM, including this reviewer. A crucial aspect is the dependence of ALL of GMM, PCA or otherwise, on the completely unexamined, unformalized praxis by which a landmark configuration is designed in the first place. I must admit that I am stunned by the authors' estimate of over 32K papers that have used PCA with GMM.

    But beating a dead horse is not a good way of designing a motor vehicle. I think the manuscript needs to begin with a higher-level view of the pathology of its target disciplines, paleontology and paleoanthropology, along the lines that David Hull demonstrated for numerical taxonomy some decades ago. Many thousands of bad methodologies require some sort of explanation of their own, in terms of (a) biologists' fears of advanced mathematics, (b) the need for publications and tenure, (c) the desirability of covers of Nature and Science, and (d) the even greater glory of getting to name a new "species." This cumulative pathology of science has turned paleoanthropology into a branch of the humanities, where no single conclusion is treated as stable beyond the next dig, the next year or so of applied genomics, or the next chemical trace analysis. In short, the field is not cumulative.

    It is not obvious that the authors' suggestion of supervised machine learning will remedy this situation, since (a) that field itself is undergoing massive changes month by month with the advent of AI applications, and, even more relevant, (b) the best ML algorithms, those based on deep neural nets, are (literally) unpublishable: we cannot see how their decisions have actually been computed. Instead, to stabilize, the field will need to figure out how to base its inferences on some synthesis of actual empirical theories.

    It's not that this reviewer is cynical, but it is fair to suggest a revision conveying a concern for the truly striking lack of organized skepticism in the literature being critiqued here. A revision along those lines would serve as a flagship example of exactly the deeper argument that reference (17) was trying to seed, of which the applied literature obviously needs a hundred times more. Such a review would do the most good if it appeared in one of the same journals - AJBA, Evolution, Journal of Human Evolution, Paleobiology - where the bulk of the most highly cited misuses of PCA have appeared.

  4. Reviewer #3 (Public Review):

    Mohseni and Elhaik challenge the widespread use of PCA as an analytical and interpretive tool in the study of geometric morphometrics. The standard approach in geometric morphometrics involves Generalised Procrustes Analysis (GPA) followed by Principal Component Analysis (PCA). Recent research has challenged the accuracy, robustness, and reproducibility of PCA outcomes in morphometric analysis. In this paper, the authors demonstrate that PCA is unreliable for such studies. Additionally, they test and compare several machine-learning methods and present MORPHIX, a Python package of their own making that incorporates the tools necessary to perform morphometric analysis using ML methods.

    Mohseni and Elhaik conducted a set of thorough investigations to test PCA's accuracy, robustness, and reproducibility, following renewed criticism and recent publications in which this method was abused. Using 2D and 3D morphometric benchmark data, the authors performed a traditional analysis using GPA and PCA, followed by a reanalysis of the data using alternative classifiers and rigorous testing of the different outcomes.

    In the current paper, the authors evaluated eight ML methods and compared their classification accuracy to that of traditional PCA. Additionally, common occurrences in attempted morphological classification of specimens, such as non-representative partial sampling, missing specimens, and missing landmarks, were simulated, and the performance of PCA versus the ML methods was evaluated (see the sketch below).
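
    A simplified sketch of the missing-landmark simulation, with synthetic stand-in data and LDA as an example classifier (the taxon counts, landmark counts, and classifier choice are illustrative assumptions, not the authors' exact protocol):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Stand-in data: six synthetic taxa, 40 specimens each,
    # 30 landmarks in 3D per specimen.
    rng = np.random.default_rng(3)
    centres = rng.normal(scale=2.0, size=(6, 30, 3))
    shapes = np.vstack([c + rng.normal(size=(40, 30, 3)) for c in centres])
    y = np.repeat(np.arange(6), 40)

    def drop_landmarks(shapes, k, rng):
        """Simulate incomplete fossils: delete k landmarks from every specimen."""
        keep = rng.choice(shapes.shape[1], shapes.shape[1] - k, replace=False)
        return shapes[:, np.sort(keep), :]

    # Re-score an example classifier as progressively more landmarks go missing.
    for k in (0, 5, 10, 15):
        X = drop_landmarks(shapes, k, rng).reshape(len(shapes), -1)
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
        print(f"{k:2d} landmarks removed: accuracy {acc:.3f}")
    ```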

    The main problem with this manuscript is that it is three papers rolled into one, and the three parts do not hold together. The title promises a new Python package, but the text of the manuscript spends relatively little time on the Python package itself and gives barely any information about what the package includes or its usefulness. It is definitely not the focus of the manuscript. The main thrust of the manuscript, which takes up most of the text, is the analysis of the papionin dataset, which shows very convincingly that PCA underperforms in virtually all conditions tested. In addition, the manuscript includes a rather vicious attack on two specific cases of misuse of PCA in paleoanthropological studies, which does not connect with the rest of the manuscript at all.

    If the manuscript is a criticism of PCA techniques, this should be reflected in the title. If it is a report of a new Python package, it should focus on the package. Otherwise, there should be two separate manuscripts here.

    The criticism of PCA is valid and important. However, pointing out that it is problematic in specific cases and is sometimes misused does not justify labeling tens of thousands of papers as questionable and does not justify vilifying an entire discipline. The authors do not make a convincing enough case that their criticism of the use of PCA in analyzing primate or hominin skulls is relevant to all its myriad uses in morphometrics. The criticism is largely based on statistical power, but it is framed as though it is a criticism of geometric morphometrics in general.