According to their orbits, meteorological satellites can be divided into polar-orbiting satellites and geostationary satellites. Existing cloud detection methods mainly focus on polar-orbiting satellite datasets. Geostationary satellite datasets, in contrast, contain time-continuous frames of particular locations. The temporally consistent information in these consecutive frames can increase detection accuracy but is challenging to exploit. In addition, driven by advances in satellite technology, onboard cloud detection applications are becoming a trend. Because satellites have limited energy and storage, applications deployed on them must be sufficiently lightweight. However, existing cloud detection models have not addressed this lightweight video cloud detection task, in which the temporally consistent features provided by time-continuous frames should be exploited to improve accuracy at low resource cost. To tackle this problem, we design a lightweight deep learning video cloud detection model: the Adaptive Memory Attention Network (AMANet). The proposed network is based on an encoder–decoder structure. The encoder consists of two branches. The main branch extracts spatial and semantic features of the current frame. In the TemporalAttentionFlow branch, the proposed PyramidEncodingModule adaptively extracts context information from frames in the sequence based on their distance to the current frame, and the proposed AdaptiveMemoryAttentionModule extracts and adaptively propagates the temporal relations among frames. The lightweight decoder gradually recovers the cloud masks to the same scale as the input image. Experiments on a Video Cloud Detection dataset built from the Fengyun4aCloud dataset demonstrate that AMANet achieves a remarkable balance between accuracy and resource consumption in comparison with current cloud detection methods, lightweight semantic segmentation methods, and video semantic segmentation methods.
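The abstract describes the architecture only at a high level. As a rough illustration, the PyTorch-style sketch below shows how a two-branch encoder (a main spatial branch plus a temporal branch with pyramid context extraction and a propagated memory) and a lightweight decoder might be wired; all module internals, channel widths, and the memory update rule are assumptions for illustration and are not specified in the abstract.

```python
# Illustrative sketch only: not the authors' released code. Module internals
# are assumptions; only the two-branch encoder + lightweight decoder layout
# and the module names follow the abstract.
import torch
import torch.nn as nn


class PyramidEncodingModule(nn.Module):
    """Assumed form: multi-rate dilated convolutions to gather context."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)]
        )
        self.fuse = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class AdaptiveMemoryAttentionModule(nn.Module):
    """Assumed form: gated fusion of a running temporal memory with the
    current frame's features; the fused result is propagated as memory."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())

    def forward(self, feat, memory):
        if memory is None:
            memory = torch.zeros_like(feat)
        g = self.gate(torch.cat([feat, memory], dim=1))
        fused = g * feat + (1 - g) * memory
        return fused, fused  # (output features, memory for the next frame)


class AMANetSketch(nn.Module):
    def __init__(self, in_ch=4, ch=32, n_classes=2):
        super().__init__()
        # Main branch: spatial/semantic features of the current frame.
        self.main = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        # TemporalAttentionFlow branch components.
        self.pem = PyramidEncodingModule(ch)
        self.amam = AdaptiveMemoryAttentionModule(ch)
        # Lightweight decoder: recover masks to the input resolution.
        self.decoder = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, n_classes, 1),
        )

    def forward(self, frames):
        # frames: (T, B, C, H, W), a time-ordered clip ending at the current frame.
        memory = None
        for t in range(frames.shape[0]):
            feat = self.main(frames[t])
            ctx = self.pem(feat)
            fused, memory = self.amam(ctx, memory)
        return self.decoder(fused)  # cloud-mask logits for the current frame


# Usage example on a dummy 4-frame clip of 4-channel 64x64 images.
if __name__ == "__main__":
    clip = torch.randn(4, 1, 4, 64, 64)
    print(AMANetSketch()(clip).shape)  # torch.Size([1, 2, 64, 64])
```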