Alzheimer's disease (AD) is the primary cause of dementia worldwide (1), with an increasing morbidity burden that may outstrip diagnosis and management capacity as the population ages. Current methods integrate patient history, neuropsychological testing and magnetic resonance imaging (MRI) to identify likely cases, yet effective practices remain variably applied and lacking in sensitivity and specificity (2). Here we report an explainable deep learning strategy that delineates unique AD signatures from multimodal inputs of MRI, age, gender, and Mini-Mental State Examination (MMSE) score. Our framework linked a fully convolutional network (FCN) to a multilayer perceptron (MLP) to construct high-resolution maps of disease probability from local brain structure. This enabled precise, intuitive visualization of individual AD risk en route to accurate diagnosis. The model was trained using clinically diagnosed AD and cognitively normal (NC) subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset (n=417) (3), and validated on three independent cohorts: the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL, n=382) (4), the Framingham Heart Study (FHS, n=102) (5), and the National Alzheimer's Coordinating Center (NACC, n=582) (6). Model performance was consistent across datasets, with mean accuracies of 0.966, 0.948, 0.815, and 0.916 for ADNI, AIBL, FHS, and NACC, respectively. Moreover, our approach exceeded the diagnostic performance of a multi-institutional team of practicing neurologists (n=11), and the high-risk cerebral regions predicted by the model closely tracked postmortem histopathological findings. This framework provides a clinically adaptable strategy for using routinely available imaging techniques such as MRI to generate nuanced neuroimaging signatures for AD diagnosis, as well as a generalizable approach for linking deep learning to pathophysiological processes in human disease.
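The FCN-to-MLP pipeline described above can be sketched in miniature with NumPy. Everything in this sketch is an illustrative assumption rather than the paper's actual architecture: the "FCN" is reduced to a single linear filter scored over 3×3×3 patches (yielding a dense voxel-level probability map, the key property of a fully convolutional design), the map is summarized into hand-picked features, and the covariates (age, gender, MMSE) and all weights are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def fcn_probability_map(volume, w, b):
    """Toy FCN stand-in: score every 3x3x3 neighborhood with one linear
    filter + sigmoid, producing a dense disease-probability map."""
    d, h, wd = volume.shape
    out = np.zeros((d - 2, h - 2, wd - 2))
    for i in range(d - 2):
        for j in range(h - 2):
            for k in range(wd - 2):
                patch = volume[i:i + 3, j:j + 3, k:k + 3].ravel()
                out[i, j, k] = 1.0 / (1.0 + np.exp(-(patch @ w + b)))
    return out

def mlp_predict(features, W1, b1, W2, b2):
    """One-hidden-layer MLP mapping pooled map features + covariates
    to a subject-level AD probability."""
    hidden = np.maximum(0.0, features @ W1 + b1)  # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # P(AD)

# Hypothetical inputs: a small MRI-like volume plus age, gender, MMSE.
mri = rng.standard_normal((8, 8, 8))
prob_map = fcn_probability_map(mri, 0.1 * rng.standard_normal(27), 0.0)

# Summarize the risk map, then append (scaled) clinical covariates.
features = np.array([prob_map.mean(), prob_map.max(),
                     78.0 / 100.0,   # age (hypothetical)
                     1.0,            # gender code (hypothetical)
                     22.0 / 30.0])   # MMSE score (hypothetical)

W1 = 0.1 * rng.standard_normal((5, 8))
b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal(8)
p_ad = mlp_predict(features, W1, b1, W2, 0.0)
print(f"voxel map shape: {prob_map.shape}, P(AD) = {float(p_ad):.3f}")
```

Because the FCN scores local neighborhoods independently, the intermediate `prob_map` retains spatial structure and can be rendered as the kind of high-resolution risk visualization the abstract describes, while the MLP collapses it (together with non-imaging covariates) into a single diagnostic probability.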