The brain messenger dopamine is traditionally known as the 'pleasure molecule', linked with our desire for food and sex, as well as with drug and gambling addictions. The precise function of dopamine in humans has remained elusive, however, and theories have relied almost exclusively on animal experiments. Using brain imaging, Pessiglione et al. scanned healthy human volunteers as they gambled for money after taking drugs that interfere with dopamine signalling. Volunteers with pharmacologically boosted dopamine became better gamblers than their dopamine-suppressed counterparts. When dopamine levels were either enhanced or reduced by drugs, the scans showed that both reward-related learning and the associated striatal activity were modulated, confirming the critical role of dopamine in integrating reward information to guide future decisions.

Theories of instrumental learning are centred on understanding how success and failure are used to improve future decisions [1]. These theories highlight a central role for reward prediction errors in updating the values associated with available actions [2]. In animals, substantial evidence indicates that the neurotransmitter dopamine might have a key function in this type of learning, through its ability to modulate cortico-striatal synaptic efficacy [3]. However, no direct evidence links dopamine, striatal activity and behavioural choice in humans. Here we show that, during instrumental learning, the magnitude of the reward prediction error expressed in the striatum is modulated by the administration of drugs enhancing (3,4-dihydroxy-L-phenylalanine; L-DOPA) or reducing (haloperidol) dopaminergic function. Accordingly, subjects treated with L-DOPA show a greater propensity to choose the most rewarding action than subjects treated with haloperidol. Furthermore, incorporating the magnitude of the prediction errors into a standard action-value learning algorithm accurately reproduces subjects' behavioural choices under the different drug conditions. We conclude that dopamine-dependent modulation of striatal activity can account for how the human brain uses reward prediction errors to improve future decisions.
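The abstract refers to a standard action-value learning algorithm driven by reward prediction errors. The sketch below illustrates the general form of such a model (Q-learning with a softmax choice rule), not the authors' exact implementation: the function name, the parameters alpha (learning rate) and beta (choice temperature), and the pe_scale factor standing in for enhanced or blunted dopaminergic signalling are all illustrative assumptions.

```python
import numpy as np

def simulate_instrumental_learning(rewards, alpha=0.3, beta=3.0, pe_scale=1.0, seed=0):
    """Minimal action-value (Q-learning) model with softmax action selection.

    rewards  : array of shape (n_trials, n_actions) giving the payoff each
               action would have yielded on each trial.
    alpha    : learning rate applied to the reward prediction error.
    beta     : inverse temperature of the softmax choice rule.
    pe_scale : illustrative scaling of the prediction-error signal, standing in
               for a drug effect on dopamine (assumption, not from the paper).
    """
    rng = np.random.default_rng(seed)
    n_trials, n_actions = rewards.shape
    q = np.zeros(n_actions)            # action values, initialised to zero
    choices, prediction_errors = [], []

    for t in range(n_trials):
        # Softmax: higher-valued actions are chosen with higher probability.
        p = np.exp(beta * q) / np.sum(np.exp(beta * q))
        a = rng.choice(n_actions, p=p)

        # Reward prediction error = obtained reward minus expected value.
        pe = rewards[t, a] - q[a]

        # Value update; pe_scale > 1 mimics enhanced and < 1 blunted dopamine.
        q[a] += alpha * pe_scale * pe

        choices.append(a)
        prediction_errors.append(pe)

    return np.array(choices), np.array(prediction_errors), q


# Illustrative usage: a two-armed bandit in which action 0 pays off on 80% of
# trials and action 1 on 20%, run with a larger and a smaller prediction-error
# scaling to mimic the L-DOPA and haloperidol conditions.
rewards = (np.random.default_rng(1).random((100, 2)) < [0.8, 0.2]).astype(float)
choices_enhanced, _, _ = simulate_instrumental_learning(rewards, pe_scale=1.2)
choices_reduced, _, _ = simulate_instrumental_learning(rewards, pe_scale=0.8)
```

Under these assumptions, a larger pe_scale drives faster convergence on the more rewarding action, qualitatively matching the reported difference in choice behaviour between the dopamine-enhanced and dopamine-reduced groups.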