Learning to Reason in Large Theories without Imitation

Kshitij Bansal; Christian Szegedy; Markus N. Rabe; Sarah M. Loos; Viktor Toman

Abstract

In this paper, we demonstrate how to do automated theorem proving in the presence of a large knowledge base of potential premises without learning from human proofs. We suggest an exploration mechanism that mixes in additional premises selected by a tf-idf (term frequency-inverse document frequency) based lookup in a deep reinforcement learning scenario. This helps with exploring and learning which premises are relevant for proving a new theorem. Our experiments show that the theorem prover trained with this exploration mechanism outperforms provers that are trained only on human proofs. It approaches the performance of a prover trained by a combination of imitation and reinforcement learning. We perform multiple experiments to understand the importance of the underlying assumptions that make our exploration approach work, thus explaining our design choices.
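The core idea of the exploration mechanism is to retrieve extra candidate premises by tf-idf similarity between the goal statement and the premise statements, and to mix these candidates into the prover's premise selection during reinforcement learning. The sketch below illustrates only the tf-idf retrieval step; it is not the paper's implementation. It assumes scikit-learn is available, and the premise strings, goal string, and top-k cutoff are hypothetical placeholders.

```python
# Minimal sketch of tf-idf based premise lookup (illustrative, not the
# paper's code). Premises and the goal are treated as bags of tokens.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical premise statements from a large theory.
premises = [
    "!x. x + 0 = x",
    "!x y. x + y = y + x",
    "!x. x * 1 = x",
    "!x y z. (x + y) + z = x + (y + z)",
]
goal = "!a b. a + b + 0 = b + a"

# Fit tf-idf on the premise statements and embed the goal in the same space.
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
premise_vecs = vectorizer.fit_transform(premises)
goal_vec = vectorizer.transform([goal])

# Rank premises by cosine similarity to the goal; the top-k hits would be
# mixed into the premise set offered to the prover during exploration.
scores = cosine_similarity(goal_vec, premise_vecs).ravel()
top_k = 2
candidates = sorted(range(len(premises)), key=lambda i: -scores[i])[:top_k]
for i in candidates:
    print(f"{scores[i]:.3f}  {premises[i]}")
```

In this sketch the retrieved candidates stand in for the "additional premises" mentioned in the abstract: during training, they supplement whatever premises the learned policy would pick on its own, so the prover can discover relevant premises without imitating human proofs.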

Benchmarks

Benchmark: automated-theorem-proving-on-holist-benchmark
Methodology: BoW2 (extra -ves)
Metrics: Percentage correct: 36.55
