Artificial intelligence (AI) research is fast approaching, or may already have reached, a bottleneck: further progress towards practical human-like reasoning in complex tasks requires quantified input from large-scale studies of human decision-making. Previous studies in psychology, for example, often rely on relatively small cohorts and very specific tasks, yet they have strongly influenced core notions in AI research such as reinforcement learning and the exploration-versus-exploitation paradigm. To contribute to this direction in AI development, we present our findings on the evolution towards world-class decision-making across large cohorts of players of the formidable game of Go. Some of these findings directly support previous work on how experts develop their skills, but we also report several previously unknown aspects of the development of expertise that suggest new avenues for AI research to explore. In particular, at the level of play that has so far eluded current AI systems for Go, we quantify the lack of 'predictability' of experts and how it changes with their level of skill.