Reinforcement learning (RL), particularly in primates, is often driven by symbolic outcomes, yet it is usually studied with primary reinforcers. To examine the neural mechanisms underlying learning from symbolic outcomes, we trained monkeys on a task in which they learned to choose options that led to token gains and to avoid options that led to token losses. We then recorded simultaneously from the orbitofrontal cortex (OFC), ventral striatum (VS), amygdala (AMY), and mediodorsal thalamus (MDt). We found that the OFC played a dominant role in coding token outcomes and token prediction errors. The other areas made complementary contributions, with the VS coding appetitive outcomes and the AMY coding the salience of outcomes. The MDt coded actions and relayed token information between the OFC and VS. Thus, the OFC leads symbolic RL processing in the ventral frontostriatal circuitry.
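
As a point of reference, a token prediction error of the kind described above is commonly formalized with a Rescorla-Wagner-style update; the notation below (r_t, c_t, V_t, \alpha, \delta_t) is an illustrative sketch, not the specific model fitted in this study.

\[
\delta_t = r_t - V_t(c_t), \qquad V_{t+1}(c_t) = V_t(c_t) + \alpha\,\delta_t
\]

where r_t is the signed token outcome on trial t (positive for a token gain, negative for a loss), c_t is the chosen option, V_t(c_t) is its learned value, \alpha is the learning rate, and \delta_t is the token prediction error.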