Large language models have been widely adopted across a range of natural language tasks, such as question answering, commonsense reasoning, and summarization. These models, however, have struggled with tasks that require quantitative reasoning, such as solving problems in mathematics, physics, and engineering.
Researchers find quantitative reasoning an intriguing application for language models because it tests them in several ways at once. Solving mathematical and scientific problems requires accurately parsing a question that mixes natural language with mathematical notation, recalling the relevant formulas and constants, and producing step-by-step solutions that involve numerical computation and symbolic manipulation. Researchers have therefore long believed that solving such reasoning problems would demand significant advances in model architecture and training methods.
New Google research introduces Minerva, a language model that uses step-by-step reasoning to answer mathematical and scientific questions. Minerva solves such problems by generating solutions that combine numerical computation with symbolic manipulation.
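To make the idea of step-by-step generation concrete, here is a minimal sketch of how a language model might be prompted to produce such solutions: a few-shot prompt shows one worked example, the model is asked to continue with its own step-by-step solution, and the final answer is parsed out of the generated text. The `sample_model` function and the prompt format are hypothetical illustrations for this sketch, not Minerva's actual interface.

```python
# Sketch of few-shot, step-by-step prompting for quantitative questions.
# `sample_model` is a hypothetical stand-in for any text-generation endpoint.
import re
from typing import Callable

FEW_SHOT_PROMPT = """Q: A train travels 120 km in 2 hours. What is its average speed?
A: Average speed = distance / time = 120 km / 2 h = 60 km/h.
The final answer is 60.

Q: {question}
A:"""


def extract_final_answer(solution: str) -> str:
    """Pull the value after 'The final answer is', if the model emitted one."""
    match = re.search(r"final answer is\s*([-\d.]+)", solution)
    return match.group(1) if match else ""


def answer_question(question: str, sample_model: Callable[[str], str]) -> str:
    """Prompt the model for a step-by-step solution and parse its final answer."""
    prompt = FEW_SHOT_PROMPT.format(question=question)
    solution = sample_model(prompt)  # model continues with its own worked steps
    return extract_final_answer(solution)
```

Because the worked example in the prompt spells out intermediate computations before stating a final answer, the model is encouraged to do the same, which is the essence of the step-by-step approach described above.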