First steps

We begin by looking at some very simple sentences:

  1. John will laugh
  2. John laughs
  3. John laughed

To a first approximation, sentence 1 is true just in case there is a point in the future at which John laughs, sentence 2 is true if John is laughing now, and sentence 3 is true just in case there was a point in the past at which John laughed. Relative to the utterance time U, these sentences make a claim about when the event E of John laughing is supposed to have occurred. Sentence 1 claims that it will happen at a time later than U: \(U < E\). Sentence 2 claims that it is happening at U: \(U = E\), and sentence 3 that it happened prior to U: \(U > E\).

This motivates a framework for the interpretation of sentences according to which they are true at various times, and the past and future tenses shift the time under consideration back into the past or forward into the future. This is the setup of temporal logic, where sentences \(\phi\) are evaluated with respect to an utterance time: \(t \models \phi\) is to be understood as "\(\phi\) is true at \(t\)". The two temporal operators \(\mathbf{P}\) (for past) and \(\mathbf{F}\) (for future) shift the time of evaluation:

  • \(t \models \mathbf{P}\phi\) iff \(\exists t'. t' < t \wedge t' \models \phi\)

    in words: \(\mathbf{P}\phi\) is true at \(t\) just in case \(\phi\) was true at an earlier time

  • \(t \models \mathbf{F}\phi\) iff \(\exists t'. t < t' \wedge t' \models \phi\)

    in words: \(\mathbf{F}\phi\) is true at \(t\) just in case \(\phi\) is true at a later time
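
These truth conditions are easy to model concretely. Below is a minimal sketch of my own (not from the text): times are integers drawn from a small finite domain, a proposition is a function from times to truth values, and the fact that John laughs only at time 3 is a stipulated toy-model assumption.

```python
# Toy model: times are the integers 0..9.
TIMES = range(10)

# A proposition is a function from times to truth values.
# Stipulated fact of the toy model: John laughs only at time 3.
laugh_john = lambda t: t == 3

def P(phi):
    """P(phi) is true at t iff phi was true at some earlier time t' < t."""
    return lambda t: any(phi(t2) for t2 in TIMES if t2 < t)

def F(phi):
    """F(phi) is true at t iff phi is true at some later time t' > t."""
    return lambda t: any(phi(t2) for t2 in TIMES if t2 > t)

# Evaluating at utterance time U = 5, the laughing event at 3 lies in the past:
U = 5
print(P(laugh_john)(U))  # True:  "John laughed"
print(laugh_john(U))     # False: "John laughs" (he is not laughing at U)
print(F(laugh_john)(U))  # False: "John will laugh" (no laughing after U)
```

Shifting the utterance time changes the verdicts: evaluated at U = 1, only the future-tense claim comes out true.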

An approximation to the meanings of sentences 1, 2, and 3 can then be given as 4, 5, and 6:

  4. \(\mathbf{F}\ (\textsf{laugh}\ \textsf{john})\)
  5. \(\textsf{laugh}\ \textsf{john}\)
  6. \(\mathbf{P}\ (\textsf{laugh}\ \textsf{john})\)

Assuming that propositions are of logical type \(t\), and individuals of type \(e\), the types of the constants in these formulae are as follows:

constant   type
john       e
laugh      et
F          tt
P          tt

Focussing first on sentence 1 (and its meaning representation in 4), we see that each constant in 4 comes from a different word in 1. Thus, whatever meaning representation we assign to will, its sole constant is F; similarly for laugh and laugh, and for John and john. To determine the precise meaning representations we wish to associate with these words, we must consider in more detail the structures from which the meanings of these sentences are to be computed.

We have two halfway-plausible syntactic analyses at hand.

The first analysis

According to the first analysis, the structure of John will laugh involves subject-movement from a lower position.

The lexical features needed to construct this structure are as follows:1

lexeme   features
John     \(\textsf{d}.\textsf{k}\)
laugh    \(\bullet\textsf{d}.\textsf{v}\)
will     \(\bullet\textsf{v}.\textsf{k}\bullet.\textsf{s}\)

The basic tree structure of this derivation is as follows: \(\textsf{will} + (\textsf{laugh} + \textsf{John})\). This is the same structure in which the constants for each lexical item naturally combine: \(\mathbf{F}_{tt} + (\textsf{laugh}_{et} + \textsf{john}_{e})\). This structural homomorphism between syntax and semantics allows us to treat merge as having the semantic reflex of simple function application. The semantically annotated lexicon is given below.
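
Since the syntactic combination \(\textsf{will} + (\textsf{laugh} + \textsf{John})\) mirrors the semantic combination \(\mathbf{F}\ (\textsf{laugh}\ \textsf{john})\) exactly, merge can be given function application as its semantic reflex. The sketch below is my own illustration, with a stipulated toy model (times as integers 0–9, John laughing only at time 3):

```python
# Toy model assumptions (mine, not from the text):
TIMES = range(10)
LAUGH_TIMES = {("john", 3)}  # stipulated fact: John laughs only at time 3

# Meanings of the three lexical items:
john = "john"                                              # type e
laugh = lambda x: (lambda t: (x, t) in LAUGH_TIMES)        # type et
F = lambda p: (lambda t: any(p(t2) for t2 in TIMES if t2 > t))  # type tt

def merge(f, a):
    """The semantic reflex of merge: plain function application."""
    return f(a)

# will + (laugh + John)  ~  F (laugh john)
meaning = merge(F, merge(laugh, john))
print(meaning(1))  # True:  at t=1 there is a later time (3) at which John laughs
print(meaning(5))  # False: no laughing event after t=5
```

Because the syntactic and semantic trees are isomorphic, no type-lifting or rebracketing is needed: interpretation simply follows the derivation.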

lexeme   features                                              meaning
John     \(\textsf{d}.\textsf{k}\)                             \(\textsf{john}_{e}\)
laugh    \({\bullet}\textsf{d}.\textsf{v}\)                    \(\textsf{laugh}_{et}\)
will     \({\bullet}\textsf{v}.\textsf{k}\bullet.\textsf{s}\)  \(\mathbf{F}_{tt}\)

As merge is feature driven, we can make the further observation that there is a homomorphism between the feature bundle of a lexical item and its semantic type: the type associated with a feature bundle has as inputs the types of its selection features (in order), and as output the type of its category feature. This correspondence simply skips over features relevant for movement.

feature          type
\(\texttt{d}\)   \(e\)
\(\texttt{v}\)   \(t\)
\(\texttt{s}\)   \(t\)

The second analysis

According to the second, more surface-oriented analysis, the structure of a sentence like John will laugh is roughly as follows:

The lexical features needed to construct this structure are as follows:

lexeme   features
John     \(\textsf{d}\)
laugh    \(\textsf{v}\)
will     \({\bullet}\textsf{v}.\textsf{d}{\bullet}.\textsf{s}\)

The basic tree structure of this derivation can be represented with the following term: \((\textsf{will} + \textsf{laugh}) + \textsf{John}\). The linguistic question we must answer is how to obtain the desired meaning of this sentence (in 4). If we pursue the idea from the first analysis of having each merge feature determine the semantic type of a lexical item, will would take two semantic arguments, and so be of type \(\textit{ty}_{v} \rightarrow \textit{ty}_{d} \rightarrow \textit{ty}_{s}\). Assigning the simplest possible meanings to laugh and John, namely \(\textsf{laugh}_{et}\) and \(\textsf{john}_{e}\), we would have the following feature-type correspondence:

feature          type
\(\texttt{d}\)   \(e\)
\(\texttt{v}\)   \(et\)
\(\texttt{s}\)   \(t\)

Thus, for the meaning of will, we would need to come up with a term of type \((et)\rightarrow e\rightarrow t\) making use of the constant \(\mathbf{F}_{tt}\). The simplest such term is \(\lambda P^{et}, x^{e}.\ \mathbf{F}_{tt}\ (P\ x)\).
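
We can check that this lifted meaning recovers the target truth conditions: applied in the surface order \((\textsf{will} + \textsf{laugh}) + \textsf{John}\), it yields the same proposition as \(\mathbf{F}\ (\textsf{laugh}\ \textsf{john})\). A sketch of my own, under stipulated toy-model assumptions (times as integers 0–9, John laughing only at time 3):

```python
# Toy model assumptions (mine, not from the text):
TIMES = range(10)
LAUGH_TIMES = {("john", 3)}  # stipulated fact: John laughs only at time 3

john = "john"                                              # type e
laugh = lambda x: (lambda t: (x, t) in LAUGH_TIMES)        # type et
F = lambda p: (lambda t: any(p(t2) for t2 in TIMES if t2 > t))  # type tt

# The lifted meaning of "will" under the second analysis:
# lambda P, x. F (P x), of type (et) -> e -> t, built from the constant F.
will = lambda P: (lambda x: F(P(x)))

surface = will(laugh)(john)  # (will + laugh) + John
target = F(laugh(john))      # F (laugh john), the meaning in 4

# The two analyses assign the same truth value at every time:
print(all(surface(t) == target(t) for t in TIMES))  # True
```

The lift does no real semantic work: it merely reroutes the subject argument to the embedded predicate before applying \(\mathbf{F}\), compensating for the mismatch between surface constituency and the target meaning.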

The moral

Both analyses share the property that there is a close relationship between feature bundles and semantic type, which is based on a mapping between individual features and semantic types. This relationship is functional, which means that the feature bundle completely determines the semantic type. This syntax-semantics mapping can be given in the following form:

\begin{align} \textit{ty}(\texttt{c}.\alpha) & = \textit{ty}_\texttt{c} \\ \textit{ty}(\bullet\texttt{c}.\alpha) & = \left\{ \begin{array}{ll} \textit{ty}_\texttt{c} \rightarrow \textit{ty}(\alpha) & \text{if c is a merge feature}\\ \textit{ty}(\alpha) & \text{if c is a move feature} \end{array}\right. \end{align}
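
The mapping \(\textit{ty}\) can be spelled out as a short recursive function. In the sketch below (my own illustration), a feature bundle is a list of strings, positive features are written with a leading "•" uniformly (following the convention of footnote 1), and which features drive movement is stipulated separately, as in the first analysis where k does:

```python
# Assumptions (mine): base types from the first analysis, and the set of
# movement-related features is stipulated per analysis.
BASE_TYPE = {"d": "e", "v": "t", "s": "t"}  # ty_d, ty_v, ty_s
MOVE_FEATURES = {"k"}                        # k drives movement here

def ty(bundle):
    """Compute the semantic type of a feature bundle; arrow types are pairs."""
    head, *rest = bundle
    feat = head.lstrip("•")
    if feat in MOVE_FEATURES:        # move features are skipped entirely
        return ty(rest)
    if head.startswith("•"):         # merge feature: contributes an input type
        return (BASE_TYPE[feat], ty(rest))
    return BASE_TYPE[feat]           # category feature: the output type

print(ty(["d", "k"]))           # John:  'e'
print(ty(["•d", "v"]))          # laugh: ('e', 't'), i.e. e -> t
print(ty(["•v", "•k", "s"]))    # will:  ('t', 't') -- the k feature is skipped
```

Note that the function is total on well-formed bundles and, as the text observes, functional: each bundle determines exactly one semantic type.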

Adopting as a working hypothesis that there is in fact a relation between the syntactic feature bundle of an expression and its semantic type, we are able to constrain the possible analyses that we can entertain for any given data set. This is true both in terms of semantics (as in, what a lexical item should mean), as seen in the previous section, and in terms of syntax (as in, what feature bundle a lexical item should have), if we have some idea as to its semantics.


  1. Due to difficulties in typesetting, I will write positive features as \(\bullet \textsf{x}\) regardless of whether they are merge or move features. Similarly, negative features are written as \(\textsf{x}\), regardless of whether they are merge or move features. ↩︎