In complex tasks, high performers often have better strategies than low performers, even with similar amounts of practice. Relatively little research has examined how people form and change strategies in tasks that permit a large set of strategies. One challenge for such research is identifying strategies from behavior. Three algorithms were developed that track the task features people use in their strategies while performing a complex task. Two of these algorithms were based on task-general, machine-learning classifiers: a support vector machine and a decision tree algorithm. The third was a task-specific algorithm. Data from several strategies in a complex task were simulated, and the algorithms were tested on how well they identified the underlying features of each simulated strategy. The two machine-learning classifiers performed better than the task-specific algorithm, but they differed in how well they identified different types of strategies. The first two studies showed that the ability of these algorithms to recover the underlying strategy depends on the complexity of the strategy relative to the quantity of performance data available: if the underlying strategy changes too frequently, the performance of the algorithms suffers. However, results from the third study showed that it is possible to use these algorithms to track strategy changes that occur within a task. The fourth study evaluated the algorithms on data from human participants. This approach to tracking strategy exploration may enable further development of theories about how people search for and select effective strategies.
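The core idea of feature recovery from simulated choices can be illustrated with a minimal sketch. This is not the paper's implementation: it stands in for the decision-tree classifier with a single-split "decision stump" search, and the simulated strategy, feature count, and sample size are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 decision episodes, each described by three candidate
# task features (hypothetical stand-ins for features of a real task).
X = rng.random((500, 3))

# Simulated strategy: choose option 1 whenever feature 0 exceeds 0.5.
y = (X[:, 0] > 0.5).astype(int)

def best_stump(X, y):
    """Return (feature, threshold, accuracy) for the single-split rule
    that best reproduces the observed choices. Searching candidate
    thresholds at quantiles of each feature keeps the sketch simple."""
    best = (None, None, 0.0)
    for f in range(X.shape[1]):
        for t in np.quantile(X[:, f], np.linspace(0.05, 0.95, 19)):
            # A stump may predict 1 on either side of the threshold;
            # take whichever orientation fits the choices better.
            acc = max(np.mean((X[:, f] > t) == y),
                      np.mean((X[:, f] <= t) == y))
            if acc > best[2]:
                best = (f, t, acc)
    return best

feature, threshold, acc = best_stump(X, y)
# `feature` should be 0: the classifier attributes the simulated
# choices to the task feature the strategy actually used.
```

A fuller classifier (an SVM or a deeper tree) generalizes this idea to strategies that combine several features, which is where the two classifiers in the studies can diverge in recovery accuracy.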