DeepMind Offers Mathematical Dataset for Training Reasoning in Neural Models
London-based, Google-owned artificial intelligence (AI) research company DeepMind today released a "large-scale extendable dataset of mathematical questions for training (and evaluating the abilities of) neural models that can reason algebraically," according to the tweet announcing the release.
The dataset can be found on GitHub here, and a related paper on how the dataset can be used to evaluate and train reasoning in neural models can be found here.
According to the dataset's documentation, the code generates "mathematical question and answer pairs from a range of question types...[at] roughly school-level difficulty."
The current version, 1.0, contains 2 million question and answer pairs per module, with question types labeled as easy, medium or hard so that training can be leveled or mixed.
Some of the many types of mathematical problems included in the dataset are algebra (sequences, linear equations, polynomial roots), calculus (differentiation), measurement (conversion, time-related), and basic arithmetic (mixed expressions, pairwise operations).
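To illustrate the kind of question and answer pairs the dataset's generator produces, here is a minimal, self-contained Python sketch in the same spirit. It is not DeepMind's code; the module names in the comments and the specific question templates are illustrative assumptions, loosely modeled on the algebra and arithmetic problem types described above.

```python
import random

def linear_1d_question(rng):
    """Illustrative 'solve for x' template, loosely in the style of the
    dataset's linear-equation questions (not the actual generator)."""
    a = rng.randint(1, 9)
    x = rng.randint(-10, 10)
    b = rng.randint(-10, 10)
    c = a * x + b  # construct the equation so the answer is an integer
    return (f"Solve {a}*x + {b} = {c} for x.", str(x))

def arithmetic_question(rng):
    """Illustrative pairwise-arithmetic template."""
    a, b = rng.randint(-50, 50), rng.randint(-50, 50)
    return (f"What is {a} + {b}?", str(a + b))

def generate_pairs(n, seed=0):
    """Produce n (question, answer) string pairs, deterministically per seed."""
    rng = random.Random(seed)
    makers = [linear_1d_question, arithmetic_question]
    return [rng.choice(makers)(rng) for _ in range(n)]

pairs = generate_pairs(4)
for question, answer in pairs:
    print(question, "->", answer)
```

Like the real dataset, each example is a free-form text question paired with a plain-text answer, so models must parse the question rather than rely on a fixed input schema.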
Setup instructions for working with the dataset with Python 3 can be found here.
Becky Nagel is the vice president of Web & Digital Strategy for 1105's Converge360 Group, where she oversees the front-end Web team and handles all aspects of digital strategy. She also serves as executive editor of the group's media websites, and you'll even find her byline on PureAI.com, the group's newest site for enterprise developers working with AI. She recently gave a talk at a leading technical publishers conference about how changes in Web technology may impact publishers' bottom lines. Follow her on Twitter @beckynagel.