Abstract
Human-level AI requires the ability to reason about the beliefs of other agents, even when those agents have reasoning styles very different from the AI’s own. The ability to carry out reasonable inferences in such situations, as well as in situations where an agent must reason about another agent’s beliefs about yet another agent, is under-studied. We show how such reasoning can be carried out in a new variant of the cognitive event calculus that we call \(\mathcal {CEC}_\mathtt {AC}\), by introducing several powerful new features for automated reasoning. First, \(\mathcal {CEC}_\mathtt {AC}\) uses classical logic at the “system level” and nonclassical logics at the “belief level”. Second, it treats all inferences made by agents as actions. This opens the door to two additional features: epistemic boxes, a kind of frame in which the reasoning of an individual agent can be simulated, and evaluated codelets, which allow our reasoner to carry out operations beyond the limits of many current systems. We explain how these features are achieved and implemented in the MATR reasoning system, and discuss their consequences.