Approximate computing (AxC) has recently emerged as a successful approach for reducing energy consumption in error-tolerant applications, such as deep neural networks (DNNs). The enormous model size and high computation cost of DNNs present significant challenges for deployment on energy-efficient, resource-constrained computing systems. Emerging DNN hardware accelerators based on AxC designs address these challenges by selectively approximating the non-critical portions of the computation. However, a systematic and principled approach that incorporates both domain knowledge and approximate hardware to reach an optimal approximation is still lacking. In this paper, we propose a probabilistic-oriented AxC (PAxC) framework that delivers high energy savings with acceptable quality by accounting for the overall probabilistic effect of approximation. To achieve aggressive approximate designs, we use the minimum likelihood error to determine the AxC synergy profile at both the application and circuit levels, enabling effective coordination of the energy-accuracy trade-off. Compared with a baseline design, the power-delay product (PDP) is reduced by up to 83.66% with an acceptable loss of accuracy. Simulations and an image-processing case study validate the effectiveness of the proposed framework.
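To make the idea of a probabilistic energy-accuracy trade-off concrete, the following is a minimal sketch of how one might select an approximate-multiplier configuration for a neuron using an expected-error model; it is not the paper's PAxC algorithm, and the configuration table, error model, and function names (e.g., `pick_config`) are hypothetical placeholders.

```python
import numpy as np

# Hypothetical approximate-multiplier configurations: each entry pairs a
# relative power-delay product (PDP, baseline = 1.0) with the standard
# deviation of its relative output error. The numbers are illustrative only.
CONFIGS = {
    "exact":      {"pdp": 1.00, "err_std": 0.00},
    "approx_lo":  {"pdp": 0.55, "err_std": 0.02},
    "approx_mid": {"pdp": 0.35, "err_std": 0.05},
    "approx_hi":  {"pdp": 0.20, "err_std": 0.12},
}

def expected_output_error(err_std, weights, act_std=1.0):
    """Std of the error at a neuron's dot-product output, assuming independent,
    zero-mean multiplier errors that scale with |weight| * activation.
    Errors add in variance, so output error grows with the square root of fan-in."""
    per_product_var = (err_std * act_std) ** 2 * np.sum(weights ** 2)
    return np.sqrt(per_product_var)

def pick_config(weights, max_output_err, configs=CONFIGS):
    """Pick the lowest-PDP configuration whose expected output error
    stays within the quality constraint (the exact design always qualifies)."""
    for name, cfg in sorted(configs.items(), key=lambda kv: kv[1]["pdp"]):
        if expected_output_error(cfg["err_std"], weights) <= max_output_err:
            return name, cfg["pdp"]
    return "exact", configs["exact"]["pdp"]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.1, size=256)  # one neuron with a fan-in of 256
    name, pdp = pick_config(weights, max_output_err=0.05)
    print(f"selected config: {name}, relative PDP: {pdp:.2f}")
```

Under these assumed numbers, layers whose weights and quality constraints tolerate more error are assigned cheaper multipliers, which is the general mechanism by which an AxC framework trades energy for accuracy.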