Cristina Cornelio; Veronika Thost

Abstract
Logical rules are a popular knowledge representation language in many domains, representing background knowledge and encoding information that can be derived from given facts in a compact form. However, rule formulation is a complex process that requires deep domain expertise, and is further challenged by today's often large, heterogeneous, and incomplete knowledge graphs. Several approaches for learning rules automatically, given a set of input example facts, have been proposed over time, including, more recently, neural systems. Yet, the area is missing adequate datasets and evaluation approaches: existing datasets often resemble toy examples that neither cover the various kinds of dependencies between rules nor allow for testing scalability. We present a tool for generating different kinds of datasets and for evaluating rule learning systems, including new performance measures.
Benchmarks
| Benchmark | Method | H-Score | R-Score |
|---|---|---|---|
| inductive-logic-programming-on-rudas | Neural-LP | 0.1025 | 0.1906 |
| inductive-logic-programming-on-rudas | FOIL | 0.152 | 0.2728 |
| inductive-logic-programming-on-rudas | AMIE+ | 0.2321 | 0.335 |
| inductive-logic-programming-on-rudas | NTP | 0.0728 | 0.1811 |
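The H-Score and R-Score above are the evaluation measures proposed with RuDaS; their exact definitions are not reproduced here. As a rough illustration of the underlying idea of fact-level evaluation, the following minimal Python sketch forward-chains a set of learned rules over the input facts and compares the derived facts with the target facts. The rule encoding, `apply_rules`, and `fact_scores` are illustrative assumptions and do not correspond to the RuDaS API.

```python
# Hypothetical sketch (not the RuDaS implementation): score a rule learner by
# the facts its rules derive. Atoms are tuples ("pred", "arg1", ...); terms
# written in uppercase are treated as variables.

def _match_body(body, facts, binding=None):
    """Yield variable bindings under which all body atoms match known facts."""
    binding = binding or {}
    if not body:
        yield binding
        return
    first, rest = body[0], body[1:]
    for fact in facts:
        if fact[0] != first[0] or len(fact) != len(first):
            continue
        new_binding, ok = dict(binding), True
        for term, value in zip(first[1:], fact[1:]):
            if term.isupper():                      # variable term
                if new_binding.setdefault(term, value) != value:
                    ok = False
                    break
            elif term != value:                     # constant mismatch
                ok = False
                break
        if ok:
            yield from _match_body(rest, facts, new_binding)


def apply_rules(rules, facts, max_rounds=10):
    """Forward-chain Horn rules given as (head_atom, [body_atoms]) pairs."""
    derived = set(facts)
    for _ in range(max_rounds):
        new_facts = set()
        for head, body in rules:
            for binding in _match_body(body, derived):
                grounded = tuple(binding.get(t, t) for t in head)
                if grounded not in derived:
                    new_facts.add(grounded)
        if not new_facts:                           # fixpoint reached
            break
        derived |= new_facts
    return derived


def fact_scores(learned_rules, input_facts, target_facts):
    """Precision, recall, and F1 over the facts the learned rules derive."""
    predicted = apply_rules(learned_rules, input_facts) - set(input_facts)
    target = set(target_facts) - set(input_facts)
    tp = len(predicted & target)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(target) if target else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    # Toy check: a transitivity rule should recover the missing ancestor fact.
    rules = [(("ancestor", "X", "Z"),
              [("ancestor", "X", "Y"), ("ancestor", "Y", "Z")])]
    given = {("ancestor", "a", "b"), ("ancestor", "b", "c")}
    target = given | {("ancestor", "a", "c")}
    print(fact_scores(rules, given, target))        # -> (1.0, 1.0, 1.0)
```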