From Dependencies to Constituents
We have formulated the ‘essence of syntax’ as constructing dependencies between heads, and we have implemented it in terms of feature checking. At the end of the day, we end up with a dependency structure (a rooted acyclic graph over heads). Although the way I specified this dependency structure was via a derivation, a little thought might lead you to suspect that we could have just said, a well-formed dependency structure is one that looks like this (with the ‘this’ being some statement about the final dependency structure), and in fact, this is true.
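For concreteness, here is one way such a structure might be encoded. This is only an illustrative sketch in Python; the class and field names are my own choices, not an official notation.

```python
from dataclasses import dataclass, field

@dataclass
class LexicalItem:
    """One head: its pronounced form plus its ordered feature bundle."""
    phon: str
    pos: list = field(default_factory=list)   # ordered positive features, e.g. ["f", "g"]
    neg: list = field(default_factory=list)   # negative features

# A dependency structure is then just a rooted acyclic graph over heads:
# for each head, the ordered (positive feature, dependent head) pairs it checks,
# e.g. {"some_head": [("f", "its_dependent"), ...], ...}
```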
The way things are usually presented in minimalist syntax involves trees with co-indexation, or multiple dominance structures.1, 2
| Copies | Multiple Dominance |
|---|---|
| *(figure)* | *(figure)* |
If we take a dependency structure of the kind that we’ve been drawing, as shown below, we can systematically turn it into a multiple dominance structure (or into any of the other more familiar structures). We do this by, for each lexical item, turning the positive dependencies into projections of the head.
Figure 1: A dependency structure for the question “which mango ripened?”
We work from left to right in our dependency structure. We begin with the C head. Each positive feature gets its own node. We create three nodes: one for the head, one for the first positive feature \(\bullet t\), and one for the second positive feature \(\bullet w\) together with the remaining negative features. These nodes are linked to each other (with edges). These nodes and edges are colored blue, so as to indicate what has been done.
Next, we consider the head ripened. We create two nodes, one for the head, and one for the first and only positive feature \(\bullet d\) together with the negative feature. These nodes are again connected to one another, and are colored red for easy visual identification.
We now turn to the next head, which. There is just one positive feature \(\bullet n\), which gets its own node together with the negative features. This new node is connected to the head, and both are colored purple.
Finally, we come to the last head, mango. There are no positive features, so no new nodes are created. We color the mango node green anyway.
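The recipe we just stepped through can be written down as a small procedure. The sketch below continues the illustrative encoding from above; the function name `unfold`, the node labels (e.g. `CP_w`), and the particular negative features assigned to each item are my own guesses for the sake of the example, not something fixed by the text.

```python
# The four heads of "which mango ripened?".  The positive features (t, w, d, n)
# follow the walkthrough above; the negative features are guesses.
C       = LexicalItem("C",       pos=["t", "w"], neg=["c"])
ripened = LexicalItem("ripened", pos=["d"],      neg=["t"])
which   = LexicalItem("which",   pos=["n"],      neg=["d", "w"])
mango   = LexicalItem("mango",   pos=[],         neg=["n"])

items = {"C": C, "ripened": ripened, "which": which, "mango": mango}

# The dependency structure of Figure 1: each head checks its positive
# features, in order, against the listed dependents.
dependencies = {
    "C":       [("t", "ripened"), ("w", "which")],
    "ripened": [("d", "which")],
    "which":   [("n", "mango")],
    "mango":   [],
}

def unfold(items, dependencies):
    """Turn a dependency structure into a multiple dominance structure,
    returned as a map from each node to the nodes it immediately dominates."""
    dominates = {}
    maximal = {}  # head -> its highest projection node

    # Step 1: build each head's projection line, one new node per positive
    # feature, chained above the node for the head itself.
    for head in dependencies:
        nodes = [head] + [f"{head}P_{f}" for f in items[head].pos]
        for lower, upper in zip(nodes, nodes[1:]):
            dominates.setdefault(upper, []).append(lower)
        maximal[head] = nodes[-1]

    # Step 2: hang each dependent's maximal projection off the node hosting
    # the positive feature it checks.  A head that enters two dependencies
    # (here, 'which') ends up with two mothers -- the multiple dominance part.
    for head, deps in dependencies.items():
        for feature, dependent in deps:
            dominates.setdefault(f"{head}P_{feature}", []).append(maximal[dependent])

    return dominates

tree = unfold(items, dependencies)
# tree["CP_w"]       == ["CP_t", "whichP_n"]
# tree["ripenedP_d"] == ["ripened", "whichP_n"]   # 'whichP_n' has two mothers
```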
This might still not look very familiar to our jaded eyes. However, if we rotate it \(270^\circ\) around its center, eliminate the features, and rename the nodes in a more familiar way, we arrive at the following.3
This structure was already contained in our starting dependency structure. All we did was rearrange the information contained in the dependency structure to present it differently. Comparing our two representations, we can conclude that the constituency structure represents each lexical item with \(k+1\) nodes, where \(k\) is the number of positive features that lexical item has, whereas the dependency representation has just one node per lexical item. Each node beyond the first in the projection of a lexical item ‘hosts’ exactly one of its positive features. Whereas the dependency representation forces us to keep track of the order of the dependencies a lexical item’s positive features enter into, this order information is encoded in a constituency tree via the dominance relation. Note that, of course, we can go from a multiple dominance structure of this sort back to a dependency structure by ‘squishing together’ the nodes projecting from the same head. We need to remember to encode the immediate dominance relations among these nodes as precedence relations in the lexical feature bundles.
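As a sanity check on the ‘squishing’ direction, here is a sketch of the inverse, again in terms of the toy encoding above. It leans on the invented `<head>P_<feature>` node names and on insertion order standing in for dominance height, so it is a shortcut for illustration, not a claim about how the real reconstruction would be stated.

```python
def squish(dominates):
    """Recover head -> ordered dependents from the structure built by unfold().

    A faithful version would read the feature order off the dominance
    relation (lower projection node = earlier positive feature) and write it
    back into the lexical feature bundle; here dict insertion order stands in
    for that.
    """
    def head_of(node):
        return node.split("P_")[0]   # strip the invented projection suffix

    deps = {}
    for node, children in dominates.items():
        head = head_of(node)
        for child in children:
            if head_of(child) != head:          # skip the projection line itself
                deps.setdefault(head, []).append(head_of(child))
    return deps

print(squish(tree))
# -> {'C': ['ripened', 'which'], 'ripened': ['which'], 'which': ['mango']}
```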
Homework4
We’ve been having some interesting discussions in the comments about relative clauses and agreement. I understand them as questions about how to analyze relative clauses. Let me flip the questions around and ask you to come up with an analysis of relative clauses. This may be Kayne’s analysis, de Vries’ analysis, or your own novel analysis; it doesn’t matter. Please make sure that it works! This means, concretely, that
- you can write down lexical items
- these lexical items allow you to assemble correct dependency structures
- which ‘unfold’ to the constituency structures you want
- these lexical items don’t allow you to assemble wacky things we don’t want
I would suggest starting out with the modest goal of deriving modified noun phrases (i.e. [DP NP Rel]), which as DPs can be the arguments of predicates.
If this works, try next to determine which heads need to agree with which other heads (in the Lithuanian examples), and specify the paths along which this is possible.
1. Everyone nods, however, to Chomsky’s ‘set’ notation, whereby you have an operation Merge, which takes two objects \(\alpha\) and \(\beta\) and combines them to form the set \(\{\alpha, \beta\}\). (This is a miserable notation with nothing to recommend it.) If the two objects are distinct, you have ‘external merge,’ but if one is contained within the other you have ‘internal merge.’
2. The foundational reference to multi-dominant syntax in minimalism is Gärtner’s book.
3. And if we use the same graphics drawing program, we arrive at the following:
4. All homeworks are part of your ‘portfolio.’ The portfolio can be turned in at the end of the semester, although it is of course simpler to do the assignments as they come, as opposed to putting it off until then.