Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the output they generate often lacks logical consistency. How can we harness LLMs’ broad-coverage parametric
knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation
function of the formal semantics for a paraconsistent logic. We provide experimental evidence for the feasibility of the method
by evaluating the function on datasets derived from several short-form factuality benchmarks. Unlike prior work, our method
offers a theoretical framework for neurosymbolic reasoning that leverages an LLM’s knowledge while preserving the underlying
logic’s soundness and completeness properties.
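To make the idea of an LLM-valued interpretation function concrete, the following is a minimal Python sketch, not the paper's implementation. It assumes a Belnap–Dunn-style four-valued semantics (a common basis for paraconsistent logics) and a hypothetical yes/no oracle `llm_asserts`; the paper's actual logic, truth values, and prompting scheme may differ.

```python
from enum import Enum


class TruthValue(Enum):
    """Four truth values of Belnap-Dunn (FDE) semantics (an assumption;
    the paper's paraconsistent logic may use a different value space)."""
    TRUE = "t"
    FALSE = "f"
    BOTH = "b"      # glut: endorsed as both true and false
    NEITHER = "n"   # gap: endorsed as neither


def llm_asserts(claim: str) -> bool:
    """Hypothetical helper: ask an LLM whether it endorses `claim`.
    Stands in for any yes/no LLM call; not an API from the paper."""
    raise NotImplementedError("wire up an LLM client here")


def interpret_atom(atom: str) -> TruthValue:
    """Sketch of an LLM-backed interpretation function for atomic
    sentences: query the claim and its negation independently, so
    contradictory parametric knowledge yields BOTH instead of an error."""
    pos = llm_asserts(atom)
    neg = llm_asserts(f"It is not the case that {atom}")
    if pos and neg:
        return TruthValue.BOTH
    if pos:
        return TruthValue.TRUE
    if neg:
        return TruthValue.FALSE
    return TruthValue.NEITHER


# FDE negation swaps TRUE and FALSE while fixing BOTH and NEITHER,
# which is what lets the semantics tolerate contradictory assignments.
NEGATION = {
    TruthValue.TRUE: TruthValue.FALSE,
    TruthValue.FALSE: TruthValue.TRUE,
    TruthValue.BOTH: TruthValue.BOTH,
    TruthValue.NEITHER: TruthValue.NEITHER,
}
```

The design point the sketch illustrates is that because the semantics is paraconsistent, an inconsistent LLM (one that endorses both a claim and its negation) produces a well-defined truth value rather than trivializing the logic, which is how soundness and completeness of the underlying proof system can be preserved.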