An integrated image-to-mesh conversion and machine learning framework for gene expression pattern image analysis. Wenlu Zhang1, Daming Feng1, Andrey Chernikov1, Nikos Chrisochoides1, Sudhir Kumar2,3, Shuiwang Ji1. 1) Department of Computer Science, Old Dominion University, Norfolk, VA; 2) Center for Evolutionary Medicine and Informatics, The Biodesign Institute, Arizona State University, Tempe, AZ; 3) School of Life Sciences, Arizona State University, Tempe, AZ 85287.

   Analysis of spatiotemporal gene expression patterns is essential for understanding the regulatory networks driving development. We used in situ images from the Berkeley Drosophila Genome Project to study gene regulation during early Drosophila embryonic development. Previous image-based methods are limited to registering images against a perfect ellipse and clustering either embryonic locations or genes into groups separately. In this work, we created a highly flexible mesh generation framework for performing image-to-mesh conversion and non-rigid image registration. Our methods provide a faithful representation of the developing embryos by accounting for the distortions present in the images. Based on the common coordinate framework resulting from the non-rigid registration, we propose a machine learning method that soft co-clusters genes and embryonic locations with similar expression patterns. Experimental results indicate that our image registration approaches are more accurate than the prior method. Additionally, simultaneous clustering of genes and embryonic locations leads to more biologically significant results.
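The abstract does not specify the authors' co-clustering algorithm, but the idea of jointly assigning genes and embryonic locations to shared groups with soft (fractional) memberships can be sketched with nonnegative matrix factorization over a genes-by-locations expression matrix; the function name and all parameters below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def soft_cocluster(X, k, n_iter=300, eps=1e-9, seed=0):
    """Illustrative soft co-clustering of a nonnegative (genes x locations)
    expression matrix X into k co-clusters via NMF (Lee-Seung multiplicative
    updates). Returns (G, L): soft membership weights for each gene and each
    embryonic location over the k co-clusters.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + eps   # gene factors
    H = rng.random((k, m)) + eps   # location factors
    for _ in range(n_iter):
        # Multiplicative updates minimizing ||X - WH||_F
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    # Row-normalize so each gene's / location's memberships sum to 1
    G = W / W.sum(axis=1, keepdims=True)
    L = (H / H.sum(axis=0, keepdims=True)).T
    return G, L

# Tiny synthetic example: two blocks of genes expressed in two
# disjoint sets of locations should fall into two co-clusters.
X = np.zeros((10, 10))
X[:5, :5] = 1.0
X[5:, 5:] = 1.0
G, L = soft_cocluster(X, k=2)
```

Because memberships are soft, a gene expressed in two spatial domains can belong partially to both co-clusters, which is the property that distinguishes this from clustering genes and locations independently.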