DeepMind Offers Mathematical Dataset for Training Reasoning in Neural Models
London-based and Google-owned artificial intelligence (AI) research and solution provider DeepMind today released a "large-scale extendable dataset of mathematical questions for training (and evaluating the abilities of) neural models that can reason algebraically," according to the tweet announcing the release.
The dataset can be found on GitHub here, and a related paper on how the dataset can be used to evaluate and train reasoning in neural models can be found here.
According to the dataset's documentation, the code generates "mathematical question and answer pairs from a range of question types...[at] roughly school-level difficulty."
The current version, 1.0, contains 2 million question-answer pairs per module, with question types labeled as easy, medium or hard so that training can be leveled or mixed.
Some of the many types of mathematical problems included in the dataset are algebra (sequences, linear equations, polynomial roots), calculus (differentiation), measurement (conversion, time-related), and basic arithmetic (mixed expressions, pairwise operations).
Setup instructions for working with the dataset using Python 3 can be found here.
Becky Nagel is the vice president of Web & Digital Strategy for 1105's Converge360 Group, where she oversees the front-end Web team and deals with all aspects of digital projects at the company, including launching and running the group's popular virtual summit and Coffee Talk series. She is an experienced tech journalist (20 years), and before her current position, was the editorial director of the group's sites. A few years ago she gave a talk at a leading technical publishers' conference about how changes in Web browser technology would impact online advertising for publishers. Follow her on Twitter @beckynagel.