Multi-Agent Systems (MAS), which consist of multiple interacting agents, are crucial in Cyber-Physical Systems (CPS) because they improve system adaptability, efficiency, and robustness through parallel processing and collaboration. However, most existing unsupervised meta-learning methods are centralized and therefore unsuitable for multi-agent systems, where data are stored in a distributed manner and are not accessible to all agents. Meta-GMVAE, an unsupervised meta-learning model built on the Variational Autoencoder (VAE) and set-level variational inference, improves generative performance by efficiently learning data representations across tasks, thereby increasing adaptability and reducing sample requirements. Inspired by these advances, we propose a novel Distributed Unsupervised Meta-Learning (DUML) framework based on Meta-GMVAE and a fusion strategy. Furthermore, we present a DUML algorithm based on the Gaussian Mixture Model (DUMLGMM), in which the parameters of the Gaussian mixture are estimated by an Expectation-Maximization (EM) algorithm. Simulations on the Omniglot and MiniImageNet datasets show that DUMLGMM achieves performance comparable to the corresponding centralized algorithm and outperforms the non-cooperative algorithm.
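For context, the abstract's reference to estimating the Gaussian-mixture parameters via EM corresponds, in its generic form, to the standard alternation between computing component responsibilities and re-estimating mixture weights, means, and covariances; the notation below ($\gamma_{ik}$, $\pi_k$, $\mu_k$, $\Sigma_k$ for a $K$-component mixture over $N$ samples) is the textbook one and is not taken from the paper, which applies EM within a distributed, set-level latent-variable setting rather than to raw samples.
\begin{align}
% E-step: responsibility of component k for sample x_i
\gamma_{ik} &= \frac{\pi_k \, \mathcal{N}(x_i \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j \, \mathcal{N}(x_i \mid \mu_j, \Sigma_j)}, \\
% M-step: re-estimate weights, means, and covariances
N_k &= \sum_{i=1}^{N} \gamma_{ik}, \qquad \pi_k = \frac{N_k}{N}, \qquad \mu_k = \frac{1}{N_k} \sum_{i=1}^{N} \gamma_{ik}\, x_i, \\
\Sigma_k &= \frac{1}{N_k} \sum_{i=1}^{N} \gamma_{ik}\, (x_i - \mu_k)(x_i - \mu_k)^{\top}.
\end{align}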