Abstract
Biological neuronal networks (BNNs) are a source of inspiration and analogy-making for researchers who focus on artificial neural networks (ANNs). Moreover, neuroscientists increasingly use ANNs as a model of the brain. Despite certain similarities between these two types of networks, important differences can be discerned. First, BNNs are sculpted by evolution and the constraints that it entails, whereas ANNs are engineered to solve particular tasks. Second, the network topology of these systems, apart from some analogies that can be drawn, exhibits pronounced differences. Here, we examine strategies to construct recurrent neural networks (RNNs) that instantiate the network topology of brains of different species. We refer to such RNNs as bio-instantiated. We investigate the performance of bio-instantiated RNNs in terms of: i) prediction performance, that is, the capacity of the network to minimize the objective function at hand on test data, and ii) speed of training, that is, how quickly the network reaches its optimal performance during training. We examine bio-instantiated RNNs in working memory tasks, where task-relevant information must be tracked as a sequence of events unfolds in time. We highlight the strategies that can be used to construct RNNs with the network topology found in BNNs without sacrificing performance. Although we observe no performance enhancement compared to randomly wired RNNs, our approach demonstrates how empirical neural network data can be used to construct RNNs, thus facilitating further experimentation with biologically realistic network topologies in contexts where such an aspect is desired.
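As a minimal illustration of the construction step summarized above (a sketch under stated assumptions, not the paper's exact pipeline), the snippet below initializes the recurrent weights of a PyTorch RNN from an empirical connectivity matrix. The network size, the placeholder connectome, and all parameter values are hypothetical; in practice the matrix would come from tract-tracing or diffusion MRI data.

```python
# Sketch: constrain an RNN's recurrent topology with an empirical connectome.
# The connectome below is a random placeholder; shapes and values are hypothetical.
import numpy as np
import torch
import torch.nn as nn

n_neurons = 64                                  # hypothetical number of units/regions
connectome = np.random.rand(n_neurons, n_neurons)
mask = (connectome > 0.5).astype(np.float32)    # binary topology: which connections exist

class BioInstantiatedRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, mask):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, output_size)
        # Fixed biological topology: absent connections are zeroed out.
        self.register_buffer("mask", torch.tensor(mask))
        with torch.no_grad():
            self.rnn.weight_hh_l0.mul_(self.mask)

    def forward(self, x):
        out, _ = self.rnn(x)                    # out: (batch, time, hidden)
        return self.readout(out[:, -1])         # prediction from the last time step

    def apply_mask(self):
        # Call after each optimizer step so pruned connections stay at zero.
        with torch.no_grad():
            self.rnn.weight_hh_l0.mul_(self.mask)

model = BioInstantiatedRNN(input_size=10, hidden_size=n_neurons, output_size=2, mask=mask)
```

Re-applying the mask after every gradient update keeps the empirically derived topology fixed while the surviving connection weights remain trainable, which is one simple way to combine biological wiring constraints with task-driven optimization.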