Graph convolutional networks (GCNs) have gained considerable attention and are widely used in graph data analytics. However, training large GCNs remains challenging owing to the inherent complexity of graph-structured data. Previous training algorithms frequently suffer from slow convergence caused by full-batch gradient descent over the entire graph, and from degraded model performance due to inappropriate node sampling. To address these issues, we propose a novel framework called Progressive Granular Ball Sampling Fusion (PGBSF). PGBSF leverages granular ball sampling to partition the original graph into a collection of subgraphs, improving both training efficiency and the capture of fine-grained structure. It then applies a progressive training scheme with a parameter-sharing strategy to train the GCN incrementally, yielding robust performance and rapid convergence. This simple yet effective strategy considerably improves classification accuracy and memory efficiency. Experimental results show that the proposed architecture consistently outperforms baseline models in accuracy on almost all datasets across different label rates. In addition, PGBSF significantly improves GCN performance on large and complex datasets. Moreover, GCN+PGBSF reduces time complexity by training on subgraphs and achieves the fastest convergence among all compared models, with relatively small variance in the training loss.
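To make the described pipeline concrete, the following is a minimal PyTorch-style sketch of subgraph partitioning followed by progressive, parameter-shared training. It is an illustration under stated assumptions, not the authors' implementation: `granular_ball_partition` is a random-chunk placeholder (the abstract does not specify the granular ball criteria), and the model sizes, learning rate, and staging schedule are hypothetical.

```python
# Illustrative sketch of a PGBSF-style training loop; all names and
# hyperparameters below are assumptions for exposition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = A_hat @ H @ W, with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return self.lin(a_hat @ h)

class GCN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.conv1 = GCNLayer(in_dim, hid_dim)
        self.conv2 = GCNLayer(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = F.relu(self.conv1(a_hat, x))
        return self.conv2(a_hat, h)

def granular_ball_partition(a_hat, x, y, n_balls):
    """Placeholder: split nodes into random chunks and take induced subgraphs.
    The actual method groups nodes into granular balls rather than at random."""
    idx = torch.randperm(x.shape[0])
    for chunk in idx.chunk(n_balls):
        yield a_hat[chunk][:, chunk], x[chunk], y[chunk]

def train_pgbsf(a_hat, x, y, n_classes, n_balls=4, epochs_per_stage=50):
    model = GCN(x.shape[1], 16, n_classes)        # one model: parameters shared across stages
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    subgraphs = list(granular_ball_partition(a_hat, x, y, n_balls))
    for stage in range(1, len(subgraphs) + 1):    # progressively add one subgraph per stage
        for _ in range(epochs_per_stage):
            for a_sub, x_sub, y_sub in subgraphs[:stage]:
                opt.zero_grad()
                loss = F.cross_entropy(model(a_sub, x_sub), y_sub)
                loss.backward()
                opt.step()
    return model
```

Because each gradient step touches only one subgraph, per-step cost and peak memory scale with the subgraph size rather than the full graph, which is consistent with the efficiency claims above.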