Automated Crossword Solver Outperforms Humans for the First Time
- By John K. Waters
- 05/23/2022
For the first time, a program developed at the Berkeley AI Research (BAIR) lab has beaten human opponents in one of the world's most prestigious crossword puzzle competitions. The Berkeley Crossword Solver (BCS), which combines state-of-the-art neural network models for open-domain question answering with probabilistic inference and local search, bested competitors in the annual American Crossword Puzzle Tournament.
The tournament, organized by New York Times crossword editor Will Shortz, is considered among the toughest crossword puzzle challenges in the world. The BCS solved all seven puzzles presented at the tournament in under a minute each, outperforming the best human competitors.
The BCS is trained on a database of more than six million question-answer pairs from historical crosswords dating back 70 years. "Our system works by generating answer candidates for each crossword clue using neural question answering models and then combines loopy belief propagation with local search to find full puzzle solutions," the researchers wrote in their paper, "Automated Crossword Solving."
Belief propagation is a message-passing, dynamic programming approach to answering conditional probability queries in a graphical model. When the graph contains loops (or cycles), the messages are passed iteratively and the method is called loopy belief propagation; its answers are then approximate rather than exact.
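To see how this plays out on a crossword grid, here is a minimal sketch under invented assumptions: a hypothetical 2x2 grid with two across slots and two down slots, made-up candidate answers with made-up "QA" scores standing in for the neural models, and plain sum-product message passing over the crossing constraints. It is not the Berkeley Crossword Solver's code, only an illustration of the loopy-belief-propagation idea the researchers describe.

```python
# Hypothetical candidate answers for a toy 2x2 grid, with made-up scores
# standing in for the neural QA model's confidences (unary potentials).
# Slots: A1 = top row, A2 = bottom row, D1 = left column, D2 = right column.
slots = ["A1", "A2", "D1", "D2"]
candidates = {
    "A1": {"NO": 0.7, "GO": 0.3},
    "A2": {"ON": 0.5, "OX": 0.5},
    "D1": {"GO": 0.9, "NO": 0.1},
    "D2": {"ON": 0.5, "OX": 0.5},
}

# Crossing constraints: (slot_u, position_in_u, slot_v, position_in_v).
# The cycle A1-D1-A2-D2-A1 is what makes the belief propagation "loopy".
crossings = [
    ("A1", 0, "D1", 0),  # top-left cell
    ("A1", 1, "D2", 0),  # top-right cell
    ("A2", 0, "D1", 1),  # bottom-left cell
    ("A2", 1, "D2", 1),  # bottom-right cell
]

EPS = 1e-3  # small compatibility for mismatched crossing letters

def compatible(wu, i, wv, j):
    """Pairwise potential: reward answer pairs whose crossing letters agree."""
    return 1.0 if wu[i] == wv[j] else EPS

def neighbours(slot):
    """Return (neighbour, position_in_slot, position_in_neighbour) triples."""
    out = []
    for (u, i, v, j) in crossings:
        if u == slot:
            out.append((v, i, j))
        elif v == slot:
            out.append((u, j, i))
    return out

# messages[(u, v)][w] ~= how strongly slot u "votes" for answer w in slot v.
messages = {(u, v): {w: 1.0 for w in candidates[v]}
            for u in slots for (v, _, _) in neighbours(u)}

for _ in range(25):  # iterate the message updates until (hopefully) stable
    new = {}
    for u in slots:
        for (v, i, j) in neighbours(u):
            msg = {}
            for wv in candidates[v]:
                total = 0.0
                for wu, score in candidates[u].items():
                    # Combine u's local score, the crossing compatibility,
                    # and incoming messages from u's *other* neighbours.
                    incoming = 1.0
                    for (x, _, _) in neighbours(u):
                        if x != v:
                            incoming *= messages[(x, u)][wu]
                    total += score * compatible(wu, i, wv, j) * incoming
                msg[wv] = total
            z = sum(msg.values())  # normalise so values stay well-scaled
            new[(u, v)] = {w: p / z for w, p in msg.items()}
    messages = new

# Approximate marginals: local score times all incoming messages, normalised.
for slot in slots:
    belief = dict(candidates[slot])
    for (u, _, _) in neighbours(slot):
        for w in belief:
            belief[w] *= messages[(u, slot)][w]
    z = sum(belief.values())
    marginals = {w: round(p / z, 3) for w, p in belief.items()}
    print(slot, max(marginals, key=marginals.get), marginals)
```

In this toy run, the confident score for D1 = GO pulls A1 toward GO even though A1's own score prefers NO. That kind of cross-clue reasoning is what the belief propagation step contributes before local search refines the full fill.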
The BCS was designed to solve American-style crossword puzzles, which often involve challenging themes, puns, and world knowledge, and typically range in grid size from 15x15 to 21x21. It is less effective on what are known as cryptic crosswords, a British style whose clues depend more heavily on metalinguistic reasoning, such as anagrams and other wordplay, and which will likely require different methods from those the BCS employs.
"Our system outperforms even the best human solvers and can solve puzzles from a wide range of domains with perfect accuracy," the researchers wrote. "Despite this progress, some challenges remain in crossword solving, especially on the QA side, and we hope to spur future research in this direction by releasing a large dataset of question-answer pairs. In future work, we hope to design new ways of evaluating automated crossword solvers, including testing on puzzles that are designed to be difficult for computers and tasking models with puzzle generation."
The BCS code is available on the project’s GitHub repository.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.