Tensor completion aims to fill in the missing entries of an incomplete tensor from its partial observations and is a popular approach to image inpainting. Most existing methods for visual data recovery fall into two categories: traditional optimization-based methods and neural network-based methods. The former usually adopt a low-rank assumption to regularize this ill-posed problem, enjoying good interpretability and generalization. However, because visual data are only approximately low rank, handcrafted low-rank priors may fail to capture complex details, limiting recovery performance. Neural network-based methods, despite their impressive performance in image inpainting, require sufficient training data for parameter learning, and their generalization to unseen data is a concern. In this paper, combining the advantages of these two distinct approaches, we propose a tensor Completion neural Network (CNet) for visual data completion. CNet consists of two parts: an encoder and a decoder. The encoder exploits the CANDECOMP/PARAFAC decomposition to produce a low-rank embedding of the target tensor, whose mechanism is interpretable. To compensate for the limitations of the low-rank constraint, a decoder consisting of several convolutional layers refines the low-rank embedding. CNet uses only the observations of the incomplete tensor to recover its missing entries and is therefore free from large training datasets. Extensive experiments on inpainting color images, grayscale video sequences, hyperspectral images, color video sequences, and light field images showcase the superiority of CNet over state-of-the-art methods in terms of restoration performance.
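To make the encoder's idea concrete, the following is a minimal NumPy sketch of the two ingredients the abstract names: reconstructing a third-order tensor from CANDECOMP/PARAFAC (CP) factor matrices, and fitting only the observed entries of an incomplete tensor. All variable names and the rank choice are illustrative assumptions; the paper's actual encoder and convolutional decoder are not reproduced here.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Reconstruct a rank-R tensor from CP factors A (I x R), B (J x R), C (K x R):
    X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def masked_loss(X_hat, X_obs, mask):
    """Squared error over observed entries only, as tensor completion requires."""
    return np.sum(((X_hat - X_obs) * mask) ** 2)

# Illustrative setup: a genuinely rank-2 tensor with ~50% of entries observed.
rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X_true = cp_reconstruct(A, B, C)
mask = (rng.random((I, J, K)) < 0.5).astype(float)

# At the ground-truth factors the masked loss vanishes; a completion method
# would instead minimize this loss over A, B, C given only the observed entries.
print(masked_loss(cp_reconstruct(A, B, C), X_true, mask))
```

In CNet's setting, this low-rank CP embedding would then be passed through convolutional layers to recover the details that a pure low-rank model misses.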