Visual acuity is better for vertical and horizontal orientations than for oblique orientations. This cross-species phenomenon is often explained by "efficient coding", whereby more neurons show sharper tuning for the orientations most common in natural vision. However, it is unclear whether experience alone can account for such biases. Here, we measured orientation representations in a convolutional neural network, VGG-16, trained on modified versions of ImageNet (rotated by 0°, 22.5°, or 45° counter-clockwise of upright). Discriminability for each model was highest near the orientations that were most common in the network's training set. Furthermore, there was an over-representation of narrowly tuned units selective for the most common orientations. These effects emerged in middle layers and increased with depth in the network. Biases emerged early in training, consistent with the possibility that non-uniform representations play a functional role in the network's task performance. Together, our results suggest that biased orientation representations can emerge through experience with a non-uniform distribution of orientations, supporting the efficient coding hypothesis.
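To make the methodology concrete, below is a minimal sketch of the two ingredients the abstract describes: rotating training images by a fixed angle (as in the modified ImageNet sets), and probing the orientation tuning of units in an intermediate VGG-16 layer with sinusoidal gratings. The grating generator, layer index, and orientation sampling here are illustrative assumptions, not the authors' exact pipeline.

```python
import torch
import torchvision.transforms.functional as TF
from torchvision.models import vgg16

def make_grating(orientation_deg, size=224, spatial_freq=8.0):
    """Sinusoidal grating at a given orientation (degrees); illustrative stimulus."""
    theta = torch.deg2rad(torch.tensor(float(orientation_deg)))
    xs = torch.linspace(-1.0, 1.0, size)
    yy, xx = torch.meshgrid(xs, xs, indexing="ij")
    # Project coordinates onto the axis defined by `theta` so luminance
    # varies along that orientation.
    proj = xx * torch.cos(theta) + yy * torch.sin(theta)
    grating = torch.sin(2 * torch.pi * spatial_freq * proj)
    return grating.expand(3, size, size).unsqueeze(0)  # shape: 1 x 3 x H x W

def rotate_batch(images, angle_deg=22.5):
    """Fixed counter-clockwise rotation applied to every training image,
    mimicking one of the rotated-ImageNet conditions described above."""
    return torch.stack([TF.rotate(img, angle_deg) for img in images])

# Untrained VGG-16 feature stack, used only to show the probing mechanics;
# the study uses networks trained on the rotated image sets.
model = vgg16(weights=None).features.eval()

@torch.no_grad()
def unit_tuning(layer_idx=17, orientations=range(0, 180, 5)):
    """Mean activation of each channel in one (assumed) layer per orientation."""
    curves = []
    for ori in orientations:
        x = make_grating(ori)
        for i, module in enumerate(model):
            x = module(x)
            if i == layer_idx:
                break
        curves.append(x.mean(dim=(0, 2, 3)))  # average over batch and space
    return torch.stack(curves)  # n_orientations x n_channels

tuning = unit_tuning()
preferred = tuning.argmax(dim=0)  # each unit's preferred orientation (index)
```

Under the efficient-coding account summarized above, a histogram of `preferred` for a network trained on upright images would be expected to peak near vertical and horizontal, and to shift by the training-set rotation (22.5° or 45°) in the rotated conditions.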