Federated learning (FL) plays an important role in collaborative distributed modeling. However, most existing studies fail to address poor generalization on out-of-distribution (OoD) data. Efforts have been made to address data heterogeneity among participants, but with limited success. Here, we propose an information-bottleneck-based FL method (FedIB), which aims to build a model with better OoD generalization. We extract domain-invariant representations across different source domains to mitigate domain heterogeneity in cross-silo scenarios. Next, to address scale imbalance, we reweight the representation importance of different domains to obtain better invariance across multiple domains. In addition, the convergence of FedIB is analyzed. Unlike previous methods that align distributions or eliminate redundancy, FedIB achieves better domain generalization by explicitly eliminating pseudo-invariant features. Finally, we conduct extensive experiments on various datasets, revealing that FedIB achieves superior performance under OoD and scale-imbalance scenarios in distributed modeling.
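For context, the classical information bottleneck objective that the name FedIB refers to can be stated in its generic form (this is the standard formulation, not necessarily the exact federated objective optimized by FedIB):
\[
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta\, I(Z; Y),
\]
where $Z$ is the learned representation of the input $X$, $Y$ is the label, and $\beta > 0$ trades off compression of the input against preservation of label-relevant information.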