In recent years, autonomous decision-making has emerged as a critical technology in air-to-air confrontation scenarios and has attracted significant attention. This paper presents a novel AI algorithm, the Missile Hit Probability Enhanced Actor-Critic (MHPAC), designed for autonomous decision-making in such confrontations. Its primary objective is to maximize the probability of defeating opponents while minimizing the risk of being shot down. By incorporating a pre-trained Missile Hit Probability (MHP) model into reward shaping and exploration within the Reinforcement Learning (RL) framework, the MHPAC algorithm enhances the learning capability of the Actor-Critic (AC) algorithm and tailors it to air-to-air confrontation scenarios. The MHP model is also integrated into the confrontation strategy to inform missile launch decisions. The confrontation strategy is trained with the MHPAC algorithm through curriculum learning and self-play. Results demonstrate that the MHPAC algorithm effectively explores efficient maneuvering strategies for missile launch and defense, overcoming the challenges posed by sparse and delayed reward signals. The decision-making capability of the integrated maneuvering and missile launch strategy is significantly enhanced by the proposed MHPAC algorithm, achieving a relative win ratio of over 65% against different opponent strategies. Moreover, the trained strategy requires only 0.039 s per decision, supporting real-time operation. This research holds considerable promise for achieving air superiority and mission success in complex and dynamic aerial environments.
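To illustrate the idea of using a pre-trained hit-probability model to densify the reward signal in an actor-critic update, the following Python sketch applies a potential-based shaping term derived from the MHP prediction. All names (MHPModel, ActorCritic, SHAPING_WEIGHT), network sizes, feature dimensions, and the specific shaping form are illustrative assumptions and not the exact MHPAC formulation described in the paper.

```python
# Minimal sketch: MHP-based reward shaping in a one-step actor-critic update.
# The pre-trained hit-probability model is frozen and used only to shape rewards.
import torch
import torch.nn as nn
from torch.distributions import Categorical

STATE_DIM, N_ACTIONS, GAMMA, SHAPING_WEIGHT = 12, 7, 0.99, 1.0  # assumed values

class MHPModel(nn.Module):
    """Stand-in for the pre-trained missile hit probability model (frozen during RL)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())
    def forward(self, s):
        return self.net(s).squeeze(-1)

class ActorCritic(nn.Module):
    """Shared body with a maneuver-policy head and a state-value head."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU())
        self.pi = nn.Linear(128, N_ACTIONS)   # discrete maneuver action logits
        self.v = nn.Linear(128, 1)            # state value
    def forward(self, s):
        h = self.body(s)
        return Categorical(logits=self.pi(h)), self.v(h).squeeze(-1)

mhp = MHPModel().eval()                 # in practice, load supervised pre-trained weights
for p in mhp.parameters():
    p.requires_grad_(False)
ac = ActorCritic()
opt = torch.optim.Adam(ac.parameters(), lr=3e-4)

def update(state, action, env_reward, next_state, done):
    """One actor-critic step with an MHP-shaped reward (potential-based sketch)."""
    dist, value = ac(state)
    with torch.no_grad():
        _, next_value = ac(next_state)
        # Dense shaping term: rewards maneuvers that increase the predicted
        # probability of a missile hit, mitigating sparse/delayed rewards.
        shaping = GAMMA * mhp(next_state) - mhp(state)
    reward = env_reward + SHAPING_WEIGHT * shaping
    target = reward + GAMMA * next_value * (1.0 - done)
    advantage = target - value
    loss = -dist.log_prob(action) * advantage.detach() + advantage.pow(2)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Dummy transition to show the call pattern.
s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
update(s, torch.tensor(3), env_reward=0.0, next_state=s_next, done=0.0)
```

In this sketch the shaped reward follows the standard potential-based form, with the frozen MHP prediction acting as the potential; the paper's actual reward-shaping and exploration mechanisms may differ in detail.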