Abstract Deciphering spatial domains using spatially resolved transcriptomics (SRT) is of great value for characterizing and understanding tissue architecture. However, the inherent heterogeneity and varying spatial resolutions of SRT data present challenges for the joint analysis of multi-modal SRT data. We introduce a multi-modal geometric deep learning method, named stMMR, that effectively integrates gene expression, spatial location, and histological information to accurately identify spatial domains from SRT data. stMMR uses graph convolutional networks (GCN) and a self-attention module for deep embedding of features within each modality, and incorporates similarity contrastive learning to integrate features across modalities. Comprehensive benchmarking on various types of spatial data shows the superior performance of stMMR in multiple analyses, including spatial domain identification, pseudo-spatiotemporal analysis, and domain-specific gene discovery. In chicken heart development, stMMR reconstructed spatiotemporal lineage structures that reflect the correct developmental sequence. In breast cancer and lung cancer, stMMR clearly delineated the tumor microenvironment and identified marker genes associated with diagnosis and prognosis. Overall, stMMR effectively utilizes the multi-modal information of diverse SRT data to explore and characterize tissue architecture in homeostasis, development, and tumors.