In recent years, object detection has advanced significantly through deep learning, especially convolutional neural networks. Most existing methods focus on detecting objects under favorable weather conditions and achieve impressive results. However, object detection in the presence of rain remains a challenging problem owing to limited visibility. In this paper, we introduce an Amalgamating Knowledge Network (AK-Net) to address the problem of detecting objects in rain-degraded images. The proposed AK-Net improves performance by coupling object detection with visibility enhancement, and it is composed of five subnetworks: a rain streak removal (RSR) subnetwork, a raindrop removal (RDR) subnetwork, a foggy rain removal (FRR) subnetwork, a feature transmission (FT) subnetwork, and an object detection (OD) subnetwork. Our approach is flexible: different object detection models can be adopted to construct the OD subnetwork for the final inference of objects. The RSR, RDR, and FRR subnetworks produce clean features from rain streak, raindrop, and foggy rain images, respectively, and pass them to the OD subnetwork through the FT subnetwork for efficient object prediction. Experimental results indicate that the mean average precision (mAP) achieved by the proposed AK-Net is up to 19.58 \(\%\) and 26.91 \(\%\) higher than that of competitive methods on the published iRain and RID datasets, respectively, while preserving the fast running time of the baseline detector.
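As a rough aid to the architecture described above, the following is a minimal PyTorch-style sketch of how the five subnetworks might be wired together. It is not the authors' implementation: the module names (RemovalSubnetwork, FeatureTransmission, AKNetSketch), the channel sizes, and the concatenation-based fusion in the FT stage are all assumptions made purely for illustration.

```python
# Minimal sketch of the AK-Net composition described in the abstract.
# All layer choices, channel sizes, and the fusion strategy are assumptions
# for illustration only; they are not the authors' implementation.
import torch
import torch.nn as nn


class RemovalSubnetwork(nn.Module):
    """Stand-in for an RSR / RDR / FRR subnetwork: maps a rain-degraded
    image to a 'clean' feature map (hypothetical 64-channel encoder)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)


class FeatureTransmission(nn.Module):
    """Stand-in for the FT subnetwork: fuses the three restoration features
    into one tensor handed to the detector (simple concat + 1x1 conv here)."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, feats: list) -> torch.Tensor:
        return self.fuse(torch.cat(feats, dim=1))


class AKNetSketch(nn.Module):
    """Illustrative AK-Net wiring: RSR / RDR / FRR -> FT -> OD.
    `detector` is any module consuming a feature map, reflecting the
    abstract's claim that the OD subnetwork is pluggable."""
    def __init__(self, detector: nn.Module, channels: int = 64):
        super().__init__()
        self.rsr = RemovalSubnetwork(channels)  # rain streak removal
        self.rdr = RemovalSubnetwork(channels)  # raindrop removal
        self.frr = RemovalSubnetwork(channels)  # foggy rain removal
        self.ft = FeatureTransmission(channels)
        self.od = detector                      # object detection subnetwork

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feats = [self.rsr(image), self.rdr(image), self.frr(image)]
        return self.od(self.ft(feats))


if __name__ == "__main__":
    # Toy 1x1-conv head standing in for a real detection model.
    toy_detector = nn.Conv2d(64, 16, 1)
    model = AKNetSketch(toy_detector)
    out = model(torch.randn(1, 3, 128, 128))
    print(out.shape)  # torch.Size([1, 16, 128, 128])
```

The sketch only illustrates the data flow implied by the abstract (three restoration branches feeding a shared detector through a transmission stage); the actual subnetwork designs, training procedure, and fusion mechanism are specified in the body of the paper.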