This website presents experiments we conducted with a non-monotonic reasoning mechanism called logical lateration:

In the Drawing experiment, we let the system generate drawings from natural language descriptions. To do this quickly, and to make sure the descriptions are realizable, we developed a simple drawing tool that lets us generate textual descriptions of realizable layouts by drawing them. These descriptions (not the drawings) are sent to the server, which applies the logical lateration method to them, creating what it "imagines" the layout may look like. You can try the experiment for yourself: create a layout and check the description. Then send it to the server and compare your original layout with the drawing the system generates. Did the mechanism produce a correct realization of the description?

In the Text Map experiment, we slightly expanded the language to allow real landmark names, such as Eugene or Skinner's Butte, instead of numbers such as 1, 2, 3. This allows us to use the system for cognitive science experiments on survey knowledge.

In the Mental Map experiment, we leveraged the system to demonstrate its use for cognitive science experiments. During an undergraduate research project, a student described his mental map of landmarks in Eugene in textual form. We created a map of all locations from his descriptions through logical lateration. As you can see, the generated map shows, apart from the three anchor points at the boundary that we used to calculate the linear transformation, a pronounced fisheye effect with the university as its focal point.

The Research

Logic, or language, and perception are two completely separate realms, correct? Not entirely. We know that we can extract symbols from images via machine learning (ML) techniques; this is what a classifier does. The opposite direction, however, has so far seemed elusive. It is, of course, possible to construct software or ontologies that, given linguistic or qualitative descriptions of a layout, produce an image. But a human translator needs to construct that software or ontology, whereas ML is a meta-mechanism for which we also find evidence in natural cognitive systems in the form of neural networks. This asymmetry has a multitude of ramifications; in particular, we understand the lower levels of cognition, from percept to symbol and associative thinking, much better than the higher levels and how they might have evolved. There is a gap between human beings and their closest relatives among the primates. We have, for instance, airplanes, laws, markets, and operas, and what fundamentally and obviously distinguishes us cognitively from our relatives are language and logic. Bridging the gap between logical reasoning and language, on the one hand, and learning and perception, on the other, is therefore a key effort for AI and Cognitive Systems research in general.

One way to bridge the gap is, trivially, if there is none. We can extract linguistic symbols from input images via classification. Can we extract images from logical formulae? Is there a systematic relationship between a sentence such as "A is north of B" and a map that depicts A and B? This project studies a candidate for such a mechanism called "Logical Lateration". Logical Lateration is a purely logical reasoning mechanism that converts formulae into a logical format with analogous properties, i.e., one that can be drawn. Sounds strange? Here is how it works in a nutshell. Assume we can represent relational statements in a simple propositional logic format (the detailed justification for this requires some understanding of equivalences between logical languages): "A is north of B" can be logically represented as "B ⋀ N → A". This is intuitively similar to the natural language statement in the sense that the verb "is" separates the two operands of "→", and "B ⋀ N" expresses the prepositional phrase "north of B". This formula has the truth table shown below:

N  A  B | B ⋀ N → A | A ⋀ N | B ⋀ N | CN(A) | CN(B)
0  0  0 |     1     |   0   |   0   |   0   |   0
0  0  1 |     1     |   0   |   0   |   0   |   0
0  1  0 |     1     |   0   |   0   |   0   |   0
0  1  1 |     1     |   0   |   0   |   0   |   0
1  0  0 |     1     |   0   |   0   |   0   |   0
1  0  1 |     0     |   0   |   1   |   0   |   0
1  1  0 |     1     |   1   |   0   |   1   |   0
1  1  1 |     1     |   1   |   1   |   1   |   1
                                     sum: 2   sum: 1

The truth table shows which truth value a logical formula obtains given the truth values of its components. This is the standard method for determining the semantics, i.e., the meaning, of a propositional logic formula (cf. the Wikipedia page https://en.wikipedia.org/wiki/Truth_table to learn more about truth tables).

In order to obtain the north-coordinate of A, logical lateration first calculates the truth values of the formula CN(A) = (B ⋀ N → A) ⋀ (A ⋀ N) and then sums over all entries of that column: the north-coordinate of A is 2. Accordingly, the formula CN(B) = (B ⋀ N → A) ⋀ (B ⋀ N) yields a north-coordinate of 1 for B. A thus has a larger north-coordinate than B, just as the sentence "A is north of B" demands.
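To make this concrete, here is a minimal Python sketch of the truth-table bookkeeping above. It is my own illustration of the computation, not the implementation behind the experiments on this site:

    from itertools import product

    def implies(p, q):
        # Material implication: p -> q is false only when p holds and q does not.
        return (not p) or q

    cn_a = cn_b = 0
    # Enumerate all 2^3 = 8 truth assignments for the variables N, A, B.
    for N, A, B in product([0, 1], repeat=3):
        rule = implies(B and N, A)     # "A is north of B" encoded as B ⋀ N → A
        cn_a += int(rule and A and N)  # one row of CN(A) = (B ⋀ N → A) ⋀ (A ⋀ N)
        cn_b += int(rule and B and N)  # one row of CN(B) = (B ⋀ N → A) ⋀ (B ⋀ N)

    print(cn_a, cn_b)  # prints "2 1": A receives the larger north-coordinate

The same bookkeeping extends to more regions, more relational statements, and further direction symbols (e.g., an east proposition E for the second coordinate); only the enumeration grows.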

As this tiny example shows, truth tables get very large very quickly. A modest example with three regions and two directions already takes 2^5 = 32 rows. However, human cognition is also severely restricted in the number of items it can retain in working memory at a time, and it employs sophisticated compression strategies. From a cognitive systems perspective, this is thus not a downside.

To learn more, read my forthcoming paper "Logical Lateration – a Cognitive Systems Experiment Towards a New Approach to the Grounding Problem" in Cognitive Systems Research (Elsevier) and explore the experiments on this site for yourself.