Our agent's model of the world is represented as a set of ground literals stored in a database $\cal D$. Since $\cal D$ is incomplete, the closed world assumption is invalid: the agent cannot automatically infer that a sentence absent from $\cal D$ is false. Thus, the agent is forced to represent false facts explicitly, as sentences with the truth value F.
In practice, many sensing actions return exhaustive information which warrants limited or ``local'' closed world information. For example, scanning with a TV camera shows all objects in a room, and the UNIX ls command lists all files in a given directory. After executing ls, it is not enough for the agent to record that paper.tex and proofs.tex are in /tex because, in addition, the agent knows that no other files are in that directory. Note that the agent is not making a closed world assumption. Rather, the agent has executed an action that yields closed world information.
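The effect of such an exhaustive sensing action can be sketched in Python. This is a hypothetical encoding, not the system's actual representation: an atom is a tuple such as `("parent.dir", "paper.tex", "/tex")`, the database is a dict from ground atoms to truth values, and closed world information is a set of schemas whose `"?"`-prefixed arguments are variables.

```python
import os

def sense_ls(db, lcw, directory):
    """Record the exhaustive result of listing `directory`:
    every file found becomes a true parent.dir literal, and one
    LCW schema records that the listing was complete."""
    for name in os.listdir(directory):
        db[("parent.dir", name, directory)] = "T"
    # Local closed world info: db now holds ALL files in directory.
    lcw.add(("parent.dir", "?f", directory))
```

After `sense_ls` runs, the agent need not store a false literal for every absent file; the single schema licenses those inferences.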
Although the agent now knows that parent.dir(foo,/tex) is false, it is impractical for the agent to store this information explicitly in $\cal D$, since there is an infinite number of such sentences. Instead, the agent represents closed world information in a meta-level database containing sentences of the form LCW($\Phi$) that record where the agent has closed world information. LCW($\Phi$) means that for all variable substitutions $\theta$, if the ground sentence $\Phi\theta$ is true in the world then $\Phi\theta$ is represented in $\cal D$. For instance, we represent the fact that $\cal D$ contains all the files in /tex with LCW(parent.dir($f$,/tex)) and that it contains the length of all such files with LCW(parent.dir($f$,/tex) $\wedge$ length($f$,$n$)).
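The matching step behind this definition can be illustrated with a short routine, a sketch under assumed representations rather than the paper's implementation. An LCW schema such as LCW(parent.dir($f$,/tex)) becomes the tuple `("parent.dir", "?f", "/tex")`, and a ground atom is covered when some consistent substitution for the `"?"`-variables maps the schema onto it; conjunctive LCW formulas are omitted for brevity.

```python
def is_var(term):
    """Schema arguments beginning with '?' are variables."""
    return isinstance(term, str) and term.startswith("?")

def covers(schema, atom):
    """Return True if some substitution for the variables in
    `schema` yields exactly the ground atom `atom`."""
    if len(schema) != len(atom) or schema[0] != atom[0]:
        return False
    subst = {}
    for s, g in zip(schema[1:], atom[1:]):
        if is_var(s):
            if subst.setdefault(s, g) != g:
                return False   # same variable, two different bindings
        elif s != g:
            return False       # constant mismatch
    return True
```

For example, `covers(("parent.dir", "?f", "/tex"), ("parent.dir", "foo", "/tex"))` holds, so the agent has closed world information about foo's presence in /tex.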
When asked whether an atomic sentence $\Phi$ is true, the agent first checks to see if $\Phi$ is in $\cal D$. If it is, then the agent returns the truth value (T or F) associated with the sentence. However, if $\Phi$ is absent from $\cal D$, then $\Phi$ could be either F or U (unknown). To resolve this ambiguity, the agent checks whether the meta-level database entails LCW($\Phi$). If so, $\Phi$ is F; otherwise it is U.
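The query procedure above amounts to a three-valued lookup, sketched here in Python under the same hypothetical tuple encoding; entailment of LCW($\Phi$) is approximated by direct matching against atomic LCW schemas rather than full meta-level reasoning.

```python
def is_var(term):
    """Schema arguments beginning with '?' are variables."""
    return isinstance(term, str) and term.startswith("?")

def covers(schema, atom):
    """Some substitution for schema's '?'-variables yields atom."""
    if len(schema) != len(atom) or schema[0] != atom[0]:
        return False
    subst = {}
    for s, g in zip(schema[1:], atom[1:]):
        if is_var(s):
            if subst.setdefault(s, g) != g:
                return False
        elif s != g:
            return False
    return True

def truth_value(db, lcw, atom):
    """Return 'T' or 'F' if atom is recorded in db; otherwise 'F'
    when an LCW schema covers it (the world is locally closed),
    else 'U' (unknown)."""
    if atom in db:
        return db[atom]
    if any(covers(schema, atom) for schema in lcw):
        return "F"
    return "U"
```

With a database holding only parent.dir(paper.tex,/tex) and the schema LCW(parent.dir($f$,/tex)), a query about parent.dir(foo,/tex) returns F, while a query about a directory the agent has never listed returns U.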