Robust textual inference via learning and abductive reasoning
We present a system for textual inference (the task of inferring whether a sentence follows from another text) that combines learning with a logical-formula semantic representation of the text. More precisely, our system begins by parsing sentences and transforming them into a logical-formula-like representation similar to the one used by Harabagiu et al. (2000). An abductive theorem prover then searches for the minimum-“cost” set of assumptions needed to show that one statement follows from the other. These costs reflect how plausible each assumption is, and are learned automatically from syntactic/semantic features and from linguistic resources such as WordNet. If one sentence follows from the other given only highly plausible, low-cost assumptions, we conclude that it can be inferred. Our approach can be viewed as combining statistical machine learning with classical logical reasoning, in the hope of marrying the robustness and scalability of learning with the precision and elegance of logical theorem proving. We report experimental results from the recent PASCAL Recognizing Textual Entailment (RTE) 2005 challenge, where a system using this inference algorithm achieved the highest confidence-weighted score.
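To make the cost-based decision rule concrete, the following is a minimal sketch (not the paper's implementation): it replaces the abductive theorem prover with a greedy matcher over flat relation triples rather than full logical formulas, and the substitution costs and entailment threshold are hand-set illustrative values, whereas in the actual system such costs are learned from syntactic/semantic features and resources like WordNet.

```python
# Toy lexical resource: the cost of assuming a text word can stand in for a
# hypothesis word (e.g., a WordNet-style near-synonym). Values are invented
# for illustration; in the paper these costs are learned automatically.
SUBSTITUTION_COST = {
    ("purchased", "bought"): 0.2,  # cheap, highly plausible assumption
}

def word_cost(hyp_word, text_word):
    """Cost of assuming text_word entails hyp_word (0 if identical)."""
    if hyp_word == text_word:
        return 0.0
    return SUBSTITUTION_COST.get((text_word, hyp_word), float("inf"))

def triple_cost(hyp_triple, text_triple):
    """Cost of deriving one hypothesis triple from one text triple,
    summing the assumption cost at each position."""
    return sum(word_cost(h, t) for h, t in zip(hyp_triple, text_triple))

def min_assumption_cost(text, hypothesis):
    """Greedy stand-in for the abductive prover: for each hypothesis
    triple, pay the cheapest set of assumptions that derives it from
    some text triple."""
    return sum(min(triple_cost(h, t) for t in text) for h in hypothesis)

def entails(text, hypothesis, threshold=1.0):
    """Declare entailment iff a 'proof' exists using only low-cost
    assumptions. The threshold here is a hypothetical placeholder."""
    return min_assumption_cost(text, hypothesis) <= threshold

# Example: the hypothesis follows given one cheap synonym assumption.
text = {("purchased", "IBM", "Lotus")}
hypothesis = {("bought", "IBM", "Lotus")}
print(min_assumption_cost(text, hypothesis))  # 0.2
print(entails(text, hypothesis))              # True
```

In this sketch the total assumption cost plays the role of a confidence score: lower cost means a more plausible inference, mirroring how the system's learned costs feed the confidence-weighted evaluation.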