Recent work has revealed that mice do not rely on a stable strategy during perceptual decision-making, but switch between multiple strategies within a single session [1, 2]. However, this switching behavior has not yet been characterized in non-stationary environments, and the factors that govern switching remain unknown. Here we address these questions using an internal state model with input-driven transitions. Our approach relies on a hidden Markov model (HMM) with two sets of per-state generalized linear models (GLMs): Bernoulli GLMs for modeling the animal's state- and stimulus-dependent choice on each trial, and multinomial GLMs for modeling input-dependent transitions between states. We used this model to analyze a dataset from the International Brain Laboratory (IBL), in which mice performed a binary decision-making task with non-stationary stimulus statistics. We found that mouse behavior in this task was accurately described by a four-state model. This model contained two "engaged" states, in which performance was high despite slight left and right biases, and two "disengaged" states, in which performance was low and choices exhibited larger left and right biases, respectively. Our analyses revealed that mice preferentially used left-biased strategies during left-biased stimulus blocks, and right-biased strategies during right-biased stimulus blocks, meaning that they could achieve reasonably high performance even in disengaged states simply by biasing choices toward the side with greater prior probability. Our model showed that past choices and past stimuli predicted transitions between left- and right-biased states, while past rewards predicted transitions between engaged and disengaged states. In particular, greater past reward predicted transitions to disengaged states, suggesting that disengagement may be associated with satiety.
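To make the model structure concrete, the following is a minimal sketch of the two GLM components in standard GLM-HMM notation; the symbols ($z_t$ for the hidden state, $x_t$ and $u_t$ for the choice and transition covariates, $w_k$, $W_{kj}$, $b_{kj}$ for the weights) are illustrative and not necessarily the exact parameterization used here.

% Emission model: per-state Bernoulli GLM for the choice y_t on trial t,
% with hidden state z_t = k, covariate vector x_t, state weights w_k,
% and logistic link sigma.
\[
  p(y_t = 1 \mid z_t = k, x_t) = \sigma\!\left(w_k^{\top} x_t\right),
  \qquad \sigma(a) = \frac{1}{1 + e^{-a}}.
\]

% Transition model: per-state multinomial (softmax) GLM in which the
% probability of moving from state k to state j depends on inputs u_t
% (e.g., past choices, stimuli, and rewards) through weights W_{kj}
% and biases b_{kj}.
\[
  p(z_{t+1} = j \mid z_t = k, u_t)
  = \frac{\exp\!\left(W_{kj}^{\top} u_t + b_{kj}\right)}
         {\sum_{j'} \exp\!\left(W_{kj'}^{\top} u_t + b_{kj'}\right)}.
\]

Setting the transition weights $W_{kj}$ to zero recovers a standard GLM-HMM with fixed transition probabilities, so the input-driven terms capture how trial history shifts the likelihood of switching strategies.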