Conventional federated learning (FL), coordinated by a central server, trains a global model while protecting the privacy of clients' training data by keeping it local. However, statistical heterogeneity hinders the global model from adapting to the non-IID data distributions across clients. Moreover, untrusted or unreliable central servers and malicious clients may compromise model integrity and availability, degrading the robustness of FL. To address these challenges, we present RobustPFL, a decentralized personalized federated learning (PFL) approach that combines $\alpha$-based Layer-position Normalized Similarity ($\alpha$-LNS) with local collaborative training to improve personalized performance, while a blockchain-based committee mechanism coordinates the aggregation process, thereby achieving both high personalized accuracy and robustness. Extensive experiments show that RobustPFL outperforms multiple baselines, including local training, FedAvg, FedReptile, Per-FedAvg, FedBN, and SPFL, on the MNIST, CIFAR10, EMNIST, and N-BaIoT datasets under four non-IID settings. We also evaluate RobustPFL's effectiveness against two classes of attacks: poisoning attacks and free-riding attacks. In particular, for three prevalent poisoning attacks (backdoor, label-flipping, and model-poisoning attacks), we compare RobustPFL with a non-defensive baseline (FedAvg) and defensive methods (Krum, trimmed mean, Bulyan, FedBN, FLAME, and FangTrmean). The results show that our approach achieves significant defensive effects.