Cervical cancer is the second most common type of cancer among women. Existing screening programs for cervical cancer, such as the Pap smear, suffer from low sensitivity, so many ill patients go undetected during screening. Using images of the cervix as an aid in cervical cancer screening has the potential to greatly improve sensitivity, and can be especially useful in resource-poor regions of the world. In this work, we develop a data-driven computer algorithm for interpreting cervical images based on color and texture.
We are able to obtain 74% sensitivity and 90% specificity when differentiating high-grade cervical lesions from low-grade lesions and normal tissue. On the same dataset, using Pap tests alone yields a sensitivity of 37% and specificity of 96%, and using the HPV test alone gives 57% sensitivity and 93% specificity. Furthermore, we develop a comprehensive algorithmic framework based on Multi-Modal Entity Coreference for combining various tests to perform disease classification and diagnosis.
When integrating multiple tests, we adopt information-gain and gradient-based approaches for learning the relative weights of different tests. In our evaluation, we present a novel algorithm that integrates cervical images, Pap, HPV, and patient age, which yields 83.21% sensitivity and 94.79% specificity, a statistically significant improvement over using any single source of information alone.
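To illustrate the information-gain weighting idea described above, the following is a minimal sketch (not the paper's actual implementation): each test's weight is set proportional to the mutual information between its binary outcome and the ground-truth label, and the weighted outcomes are then fused into a single score. The test outcomes and labels below are entirely hypothetical toy data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(label; feature) = H(label) - H(label | feature)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

# Hypothetical binary outcomes of three tests on eight patients,
# plus ground-truth labels (1 = high-grade lesion).
tests = {
    "image": [1, 1, 0, 1, 0, 1, 0, 0],
    "pap":   [0, 1, 0, 0, 0, 1, 0, 1],
    "hpv":   [1, 0, 0, 1, 0, 1, 1, 0],
}
labels = [1, 1, 0, 1, 0, 1, 0, 0]

# Weight each test by its (normalized) information gain.
gains = {name: information_gain(out, labels) for name, out in tests.items()}
total = sum(gains.values())
weights = {name: g / total for name, g in gains.items()}

def fused_score(case_idx):
    """Weighted combination of the tests' outcomes for one patient."""
    return sum(weights[name] * tests[name][case_idx] for name in tests)

print(weights)
```

A more informative test (here, the toy "image" outcome, which agrees with every label) receives a larger weight, so it dominates the fused score; the gradient-based alternative mentioned above would instead learn the weights by optimizing a classification loss.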
Dezhao Song, Edward Kim, Xiaolei Huang, Joseph Patruno, Héctor Muñoz-Avila, Jeff Heflin, L. Rodney Long, and Sameer Antani, "Multimodal entity coreference for cervical dysplasia diagnosis," IEEE Transactions on Medical Imaging (IEEE TMI) 34 (2015), no. 1, 229–245.