Unmanned aerial vehicles (UAVs) integrated with computer vision technology have emerged as an effective means of information acquisition in various applications. However, because targets occupy only a small proportion of the image pixels and are susceptible to background interference in multi-angle UAV imaging, missed and false detections occur frequently. To address this problem, a small target detection algorithm, EDANet, is proposed based on YOLOv8. First, the backbone network is replaced with EfficientNet, which jointly scales the network size and the input image resolution through a compound scaling factor. Second, the EC2f feature extraction module is designed to encode features along different spatial directions through parallel branches; the resulting position information is embedded into the channel attention to enhance the spatial representation ability of features. To mitigate the low utilization of small target pixels, we introduce the DTADH detection module, which fuses features via a feature-sharing interactive network while a task-alignment predictor assigns the classification and localization tasks. In this way, feature utilization is improved and the number of parameters is reduced. Finally, leveraging logit and feature knowledge distillation, we employ binary probability mapping of soft labels and a soft-label weighting strategy to strengthen the algorithm's learning of target classification and localization. Experimental validation on the UAV aerial dataset VisDrone2019 demonstrates that EDANet outperforms existing methods, reducing GFLOPs by 39.3% and improving mAP by 4.6%.
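To make the direction-aware attention idea behind EC2f concrete, the following is a minimal sketch (not the authors' implementation) of a coordinate-attention-style block, assuming EC2f embeds positional information into channel attention via two parallel directional pooling branches as described above; the class and parameter names are illustrative assumptions.

```python
# Hypothetical sketch of a direction-aware channel attention block,
# assumed from the abstract's description of EC2f (parallel directional
# branches embedding position information into channel attention).
import torch
import torch.nn as nn


class DirectionalChannelAttention(nn.Module):
    """Encodes features along H and W separately, then reweights channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # aggregate over width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # aggregate over height
        self.conv1 = nn.Conv2d(channels, hidden, kernel_size=1)
        self.act = nn.SiLU()
        self.conv_h = nn.Conv2d(hidden, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Parallel branches: one encoding per spatial direction.
        x_h = self.pool_h(x)                      # (b, c, h, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (b, c, w, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        # Position-aware channel reweighting of the input feature map.
        return x * a_h * a_w


if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)
    print(DirectionalChannelAttention(64)(feat).shape)  # torch.Size([1, 64, 80, 80])
```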