Publications
[41] 
Gregory M. Kobele.
The cooper storage idiom.
Journal of Logic, Language and Information, to appear.
[ DOI 
pdf 
code ]
Cooper storage is a widespread technique for associating sentences with their meanings, used (sometimes implicitly) in diverse linguistic and computational linguistic traditions. This paper encodes the data structures and operations of cooper storage in the simply typed linear λ-calculus, revealing the rich categorical structure of a graded applicative functor. In the case of finite cooper storage, which corresponds to ideas in current transformational approaches to syntax, the semantic interpretation function can be given as a linear homomorphism acting on a regular set of trees, and thus generation can be done in polynomial time.

[40] 
Gregory M. Kobele.
Actual language use and competence grammars.
Theoretical Linguistics, 42(3-4):277-290, 2016.
[ DOI 
pdf ]
This is a commentary on an article by Ruth Kempson et al. motivating Dynamic Syntax. I clarify their position on the competence/performance and other distinctions, and argue that 1. incremental interpretation comes `for free' with compositional semantic interpretation, and 2. traditional `competence' grammars can account for split utterances in dialogue in exactly the way proposed by Kempson et al.

[39] 
Gregory M. Kobele and Jason Merchant.
The dynamics of ellipsis.
Theoretical Linguistics, 42(3-4):291-296, 2016.
[ DOI 
pdf ]
This is a commentary on an article by Ruth Kempson et al. motivating dynamic syntax. We claim that the Dynamic Syntax approach to ellipsis is an instance of one which is common to many frameworks. We describe Kobele's (2015) account (which is in the same vein), and suggest that mismatches in ellipsis will prove difficult for the DS variant.

[38] 
Alexander Clark, Makoto Kanazawa, Gregory M. Kobele, and Ryu Yoshinaka.
Distributional learning of some nonlinear tree grammars.
Fundamenta Informaticae, 146(4):339-377, 2016.
[ DOI 
pdf ]
A key component of Clark and Yoshinaka's distributional learning algorithms is the extraction of substructures and contexts contained in the input data. This problem often becomes intractable with nonlinear grammar formalisms due to the fact that more than polynomially many substructures and/or contexts may be contained in each object. Previous works on distributional learning of nonlinear grammars avoided this difficulty by restricting the substructures or contexts that are made available to the learner. In this paper, we identify two classes of nonlinear tree grammars for which the extraction of substructures and contexts can be performed in polynomial time, and which, consequently, admit successful distributional learning in its unmodified, original form.

[37] 
Gregory M. Kobele.
LF-copying without LF.
Lingua, 166, Part B:236-259, 2015.
[ DOI 
pdf ]
A copying approach to ellipsis is presented, whereby the locus of copying is not a level of derived syntactic structure (LF), but rather the derivation itself. The ban on preposition stranding in sprouting follows without further stipulation, and other, seemingly structure-sensitive, empirical generalizations about elliptical constructions, including the preposition stranding generalization, follow naturally as well. Destructive operations which `repair' non-identical antecedents are recast in terms of exact identity of derivations with parameters. In the context of a compositional semantic interpretation scheme, the derivational copying approach to ellipsis presented here is revealed to be a particular instance of a proform theory, thus showing that the distinctions between, and arguments about, syntactic and semantic theories of ellipsis need to be revisited.

[36] 
Gregory M. Kobele and Sylvain Salvati.
The IO and OI hierarchies, revisited.
Information and Computation, 243:205-221, 2015.
[ DOI 
pdf ]
We study languages of λ-terms generated by IO and OI unsafe grammars. These languages can be used to model meaning representations in the formal semantics of natural languages following the tradition of Montague. Using techniques pertaining to the denotational semantics of the simply typed λ-calculus, we show that the emptiness and membership problems for both types of grammars are decidable. In the course of the proof of the decidability results for OI, we identify a decidable variant of the λ-definability problem, and prove a stronger form of Statman's finite completeness theorem.

[35] 
John Case, Jeffrey Heinz, and Gregory M. Kobele.
Interpreted learning: A framework for investigating the contribution
of various information sources to the learning problem.
In Carson T. Schütze and Linnaea Stockall, editors,
Connectedness: Papers by and for Sarah VanWagenen, volume 18 of
UCLA Working Papers in Linguistics, pages 90-101. UCLA, 2014.
[ pdf 
.html ]
Natural language utterances present learners with more information about the underlying grammar than is typically encoded in the orthographic string. While multiple information sources such as prosody and semantics can be encoded as a single object, allowing the results of typical learning frameworks to apply, this coding obscures the question of exactly how the learner can draw inferences about a single object from information made available by these multiple different perspectives. Here we tease apart the contribution of different information sources to the learning problem by generalizing Gold's learning paradigm. The main result is a proof that multiple sources of information can interact synergistically to facilitate learning of the target underlying grammar.

[34] 
Gregory M. Kobele.
Meeting the boojum.
Theoretical Linguistics, 40(1-2):165-173, 2014.
[ DOI 
pdf ]
This is a commentary on an article by Müller and Wechsler comparing lexical and phrasal approaches to argument structure. I claim that the basic distinction they are making between lexical and phrasal approaches is not well defined, that their arguments are circular, and that their complexity metric is unmotivated.

[33] 
Makoto Kanazawa, Gregory M. Kobele, Jens Michaelis, Sylvain Salvati, and Ryo
Yoshinaka.
The failure of the strong pumping lemma for multiple context-free
languages.
Theory of Computing Systems, 55(1):250-278, 2014.
[ DOI 
pdf ]
Seki et al. (“On multiple context-free grammars”, Theoretical Computer Science 88 (1991)) showed that every m-multiple context-free language L is weakly 2m-iterative. Whether every m-multiple context-free language L is 2m-iterative has been open. We show that there is a 3-multiple context-free language that is not k-iterative for any k.

[32] 
Gregory M. Kobele and Sylvain Salvati.
The IO and OI hierarchies revisited.
In Fedor V. Fomin, Rusins Freivalds, Marta Kwiatkowska, and David
Peleg, editors, Automata, Languages, and Programming, volume 7966 of
Lecture Notes in Computer Science, pages 336-348. Springer, Berlin,
2013.
[ DOI ]
Superseded by [36]

[31] 
Gregory M. Kobele, Sabrina Gerth, and John T. Hale.
Memory resource allocation in top-down minimalist parsing.
In Glyn Morrill and Mark-Jan Nederhof, editors, FG 2012/2013,
volume 8036 of Lecture Notes in Computer Science, pages 32-51.
Springer, Heidelberg, 2013.
[ DOI 
pdf ]
This paper provides a linking theory between the minimalist grammar formalism and offline behavioural data. We examine the transient stack states of a top-down parser for Minimalist Grammars as it analyzes embedded sentences in English, Dutch and German. We find that the number of time steps that a derivation tree node persists on the parser's stack derives the observed contrasts in English center embedding, and the difference between German and Dutch embedding. This particular stack occupancy measure formalizes the leading idea of “memory burden” in a way that links predictive, incremental parsing to specific syntactic analyses.

[30] 
Gregory M. Kobele.
Idioms and extended transducers.
In Proceedings of the 11th International Workshop on Tree
Adjoining Grammars and Related Formalisms (TAG+11), pages 153-161, Paris,
France, September 2012.
[ pdf 
html ]
There is a tension between the idea that idioms are listed in the lexicon and the idea that they are composed, in the standard way, of the lexical items which seem to inhabit them. In other words, in order to maintain the insight that idioms actually contain the words they look like they contain, we need to derive them syntactically from these words. However, the entity that should be assigned a special meaning is then a derivation, which is not the kind of object that can occur in a lexicon (which is, by definition, the set of atoms of which derivations are built), and thus not the kind of thing that we are able to assign meanings directly to. Here I show how to resolve this tension in an elegant way, one which bears striking similarities to proposals made by psychologists and psycholinguists working on idioms.

[29] 
Gregory M. Kobele and Jens Michaelis.
On the formmeaning relations definable by CoTAGs.
In Proceedings of the 11th International Workshop on Tree
Adjoining Grammars and Related Formalisms (TAG+11), pages 207-213, Paris,
France, September 2012.
[ pdf 
html ]
Barker notes that from a purely syntactic perspective, in the context of simple (i.e. not multi-component) TAGs, adding cosubstitution affects neither weak nor strong generative capacity (in the sense of derived string and tree languages). Clearly, however, something is different: after adding the operation of cosubstitution, derivational order matters in the sense that one derived syntactic representation can potentially be associated with more than one simultaneously derived semantic representation. As Barker points out, the introduction of the cosubstitution operator allows for a straightforward adaptation of the notion of derivation tree such that two derivation trees can be different depending on when a cosubstitution step takes place. We demonstrate that the form-meaning mappings definable by coTAGs go beyond those of “simple” STAGs (Shieber, 1994; Shieber, 2006). In particular, the set of meanings, the second projection of the synchronously derived syntactic and semantic representations, can be, up to a homomorphism abstracting away from instances of λ and variables, the non-tree-adjoining language MIX(k), for any k ≥ 3.

[28] 
Gregory M. Kobele.
Eliding the derivation: A minimalist formalization of ellipsis.
In Stefan Müller, editor, Proceedings of the 19th
International Conference on Head-Driven Phrase Structure Grammar, Chungnam
National University Daejeon, pages 307-324, Stanford, 2012. CSLI
Publications.
[ pdf 
html ]
In this paper I use the formal framework of minimalist grammars to implement a version of the traditional approach to ellipsis as `deletion under syntactic (derivational) identity,' which, in conjunction with canonical analyses of voice phenomena, immediately allows for voice mismatches in verb phrase ellipsis, but not in sluicing. This approach to ellipsis is naturally implemented in a parser by means of threading a state encoding a set of possible antecedent derivation contexts through the derivation tree. Similarities between ellipsis and pronominal resolution are easily stated in these terms. In the context of this implementation, two approaches to ellipsis in the transformational community are naturally seen as equivalent descriptions at different levels: the LF-copying approach to ellipsis resolution is best seen as a description of the parser, whereas the phonological deletion approach is best seen as a description of the underlying relation between form and meaning.

[27] 
Gregory M. Kobele.
Deriving reconstruction asymmetries.
In Artemis Alexiadou, Tibor Kiss, and Gereon Müller, editors,
Local Modeling of Non-Local Dependencies in Syntax, volume 547 of
Linguistische Arbeiten, pages 477-500. de Gruyter, Berlin, 2012.
[ pdf 
html ]
There appears to be a systematic difference in the reconstructability of noun phrases and predicates. In this paper I show that reconstructing the A/A-bar distinction in terms of slash-feature percolation and movement allows for a simple derivational formulation of the principles of binding and scope which derives a generalization very much along the lines of the one presented by Huang (1993).

[26] 
Gregory M. Kobele and Malte Zimmermann.
Quantification in German.
In Edward L. Keenan and Denis Paperno, editors, Handbook of
Quantifiers in Natural Language, volume 90 of Studies in Linguistics
and Philosophy, chapter 5, pages 227-283. Springer, Berlin, 2012.
[ DOI 
pdf ]
This paper is a systematic survey of the linguistic expression of quantification in German.

[25] 
Gregory M. Kobele.
Importing Montagovian dynamics into minimalism.
In Denis Béchet and Alexandre Dikovsky, editors, Logical
Aspects of Computational Linguistics, volume 7351 of Lecture Notes in
Computer Science, pages 103-118, Berlin, 2012. Springer.
[ DOI 
pdf ]
Minimalist analyses typically treat quantifier scope interactions as being due to movement, thereby bringing constraints thereupon into the purview of the grammar. Here we adapt De Groote's continuation-based presentation of dynamic semantics to minimalist grammars. This allows for a simple and simply typed compositional interpretation scheme for minimalism.

[24] 
Gregory M. Kobele and Jens Michaelis.
CoTAGs and ACGs.
In Denis Béchet and Alexandre Dikovsky, editors, Logical
Aspects of Computational Linguistics, volume 7351 of Lecture Notes in
Computer Science, pages 119-134, Berlin, 2012. Springer.
[ DOI 
pdf ]
Our main concern is to provide a complete picture of how coTAGs, as a particular variant within the general framework of tree adjoining grammars (TAGs), can be captured under the notion of abstract categorial grammars (ACGs). CoTAGs were introduced by Barker as an “alternative conceptualization” in order to cope with the tension between the TAG mantra of the “locality of syntactic dependencies” and the seeming non-locality of quantifier scope. We show how our formalization of Barker's proposal leads to a class of higher-order ACGs. By taking this particular perspective, Barker's proposal turns out to be a straightforward extension of the proposal of Pogodalla, where the former, in addition to “simple” inverse scope phenomena, also captures inverse linking and non-inverse-linking phenomena.

[23] 
Gregory M. Kobele.
Ellipsis: Computation of.
WIREs Cognitive Science, 3(3):411-418, 2012.
[ DOI 
pdf ]
A computational account of ellipsis should specify not only how the meaning of an elliptical sentence is computed in context, but also a description of what is being computed. Many proposals can be divided into two groups, as per whether they compute the meaning of an elliptical sentence based on the semantic or the syntactic parts of its context. A unifying theme of these proposals is that they are all based on the idea that the meaning of an elliptical sentence is determinable based on a structured representation which is transformationally related to its surface syntactic structure.

[22] 
Gregory M. Kobele and Jens Michaelis.
Disentangling notions of specifier impenetrability: Late adjunction,
islands, and expressive power.
In Makoto Kanazawa, András Kornai, Marcus Kracht, and Hiroyuki
Seki, editors, The Mathematics of Language, volume 6878 of Lecture
Notes in Computer Science, pages 126-142. Springer, Berlin, 2011.
[ DOI 
pdf ]
In this paper we investigate the weak generative capacity of minimalist grammars with late adjunction. We show that by viewing the Specifier Island Condition (SpIC) as the union of three separate constraints, we obtain a more nuanced perspective on previous results on constraint interaction in minimalist grammars, as well as the beginning of a map of the interaction between late adjunction and movement constraints. Our main result is that minimalist grammars with the SpIC restricted to movement-generated specifiers and with the Shortest Move Constraint, in conjunction with late adjunction, can define languages whose intersection with an appropriate regular language is not semilinear.

[21] 
Christina S. Kim, Gregory M. Kobele, Jeffery T. Runner, and John T. Hale.
The acceptability cline in VP ellipsis.
Syntax, 14(4):318-354, 2011.
[ DOI 
pdf ]
This paper lays the foundations for a processing model of relative acceptability levels in verb phrase ellipsis (VPE). In the proposed model, mismatching VPE examples are grammatical but less acceptable because they violate heuristic parsing strategies. This analysis is presented in a Minimalist Grammar formalism that is compatible with standard parsing techniques. The overall proposal integrates computational assumptions about parsing with a psycholinguistic linking hypothesis. These parts work together with the syntactic analysis to derive novel predictions that are confirmed in a controlled experiment.

[20] 
Gregory M. Kobele.
Minimalist tree languages are closed under intersection with
recognizable tree languages.
In Sylvain Pogodalla and JeanPhilippe Prost, editors, LACL
2011, volume 6736 of Lecture Notes in Artificial Intelligence, pages
129-144, Berlin, 2011. Springer.
[ DOI 
pdf ]
Minimalist grammars are a mildly context-sensitive grammar framework within which analyses in mainstream Chomskyan syntax can be faithfully represented. Here it is shown that both the derivation tree languages and derived tree languages of minimalist grammars are closed under intersection with regular tree languages. This allows us to conclude that taking into account the possibility of `semantic crashes' in the standard approach to interpreting minimalist structures does not alter the strong generative capacity of the formalism. In addition, a similar proof method shows that adding complexity filters to minimalist grammars does not change the class of derived tree languages.

[19] 
Gregory M. Kobele.
A formal foundation for A and A-bar movement in the minimalist
program.
In Christian Ebert, Gerhard Jäger, and Jens Michaelis, editors,
MOL 10/11, volume 6149 of Lecture Notes in Computer Science,
pages 145-159. Springer, Berlin, 2010.
[ DOI 
pdf ]
It seems a fact that movement dependencies come in two flavours: “A” and “A-bar”. Over the years, a number of apparently independent properties have been shown to cluster together around this distinction. However, the basic structural property relating these two kinds of movement, the ban on improper movement (`once you go bar, you never go back'), has never been given a satisfactory explanation. Here, I propose a timing-based account of the A/A-bar distinction, which derives the ban on improper movement, and allows for a simple and elegant account of some of their differences. In this account, “A” dependencies are those which are entered into before an expression is first merged into a structure, and “A-bar” dependencies are those an expression enters into after having been merged. The resulting system is mildly context-sensitive, providing therefore a restrictive account of possible human grammars, while remaining expressive enough to be able to describe the kinds of dependencies which are thought to be manifest.

[18] 
Gregory M. Kobele.
Without remnant movement, MGs are context-free.
In Christian Ebert, Gerhard Jäger, and Jens Michaelis, editors,
MOL 10/11, volume 6149 of Lecture Notes in Computer Science,
pages 160-173. Springer, Berlin, 2010.
[ DOI 
pdf ]
Minimalist grammars offer a formal perspective on a popular linguistic theory, and are comparable in weak generative capacity to other mildly context-sensitive formalisms. Minimalist grammars allow for the straightforward definition of so-called remnant movement constructions, which have found use in many linguistic analyses. It has been conjectured that the ability to generate this kind of configuration is crucial to the super-context-free expressivity of minimalist grammars. This conjecture is here proven.

[17] 
Gregory M. Kobele.
Inverse linking via function composition.
Natural Language Semantics, 18(2):183-196, 2010.
[ DOI 
pdf ]
The phenomenon of Inverse Linking has proven challenging for theories of the syntax-semantics interface; a noun phrase within another behaves with respect to binding as though it were structurally independent. In this paper I show that, using an LF-movement style approach to the syntax-semantics interface, we can derive all and only the appropriate meanings for such constructions using no semantic operations other than function application and composition. The solution relies neither on a proliferation of lexical ambiguity nor on abandoning the idea that pronouns denote variables, but rather on a straightforward (and standard) reification of assignment functions, which allows us to define abstraction operators within our models.

[16] 
Jeffrey Heinz, Gregory M. Kobele, and Jason Riggle.
Evaluating the complexity of Optimality Theory.
Linguistic Inquiry, 40(2):277-288, 2009.
[ DOI 
pdf ]
Idsardi (2006) claims that Optimality Theory (OT; Prince and Smolensky 1993/2004) is “in general computationally intractable” on the basis of a proof adapted from Eisner (1997a). We take issue with this conclusion on two grounds. First, the intractability result holds only in cases where the constraint set is not fixed in advance (contra usual definitions of OT) and second, the result crucially depends on a particular representation of OT grammars. We show that there is an alternative representation of OT grammars that allows for efficient computation of optimal surface forms and provides deeper insight into the sources of complexity of Optimality Theory. We conclude that it is a mistake to reject Optimality Theory on the grounds that it is computationally intractable.

[15] 
Gregory M. Kobele.
Syntactic identity in survive minimalism: Ellipsis and the
derivational identity hypothesis.
In Michael T. Putnam, editor, Towards a derivational syntax:
Survive-minimalism. John Benjamins, Amsterdam, 2009.
[ pdf 
html ]
Over the years, a number of counterexamples to the hypothesis that ellipsis resolution is mediated via syntactic identity have been identified. At the same time, however, evidence which seems to require comparison of syntactic structures in ellipsis resolution has also been unearthed. On top of this empirical puzzle, survive minimalism places an additional theoretical constraint: syntactic structures, once assembled, are opaque to further search or manipulation. In this paper, I show that a simple perspective shift allows us both to view the purported counterexamples as providing glimpses into the nature of the operations which build syntactic structure, and to satisfy the theoretical constraints imposed by survive minimalism's derivational take on syntactic structure.

[14] 
Gregory M. Kobele.
Across-the-board extraction in minimalist grammars.
In Proceedings of the Ninth International Workshop on Tree
Adjoining Grammar and Related Formalisms (TAG+9), pages 113-128, 2008.
[ pdf ]
Minimalist grammars cannot provide adequate descriptions of constructions in which a single filler saturates two mutually independent gaps, as is commonly analyzed to be the case in parasitic gap constructions and other across-the-board extraction phenomena. In this paper, I show how a simple addition to the minimalist grammar formalism allows for a unified treatment of control and parasitic gap phenomena, and can be restricted in such a way as to account for across-the-board exceptions to the coordinate structure constraint. In the context of standard constraints on movement, the weak generative capacity of the formalism remains unaffected.

[13] 
Gregory M. Kobele.
Agreement bottlenecks in Italian.
In Claudia Casadio and Joachim Lambek, editors, Computational
Algebraic Approaches to Natural Language, pages 191-212. Polimetrica,
Milan, 2008.
[ pdf 
html ]
This paper follows a progression of pregroup analyses of agreement in the Italian DP as they are successively modified so as to express more sophisticated relationships between overt expressions of agreement. The desire to state our intuitions about the data directly in the object language of the theory will be seen to put pressures on the underlying combinatory system that the types will be unable to accommodate. Allowing more expressive types (while holding constant the underlying calculus) will alleviate some of the pressure put on the combinatory system, and allow us to capture certain generalizations about relations between paradigms that are out of the reach of previous analyses. In particular, it will be shown that certain kinds of `metaparadigmatic' phenomena [Bobaljik, 2002] are statable without additional stipulation in our setting.

[12] 
Gregory M. Kobele, Christian Retoré, and Sylvain Salvati.
An automata theoretic approach to minimalism.
In James Rogers and Stephan Kepser, editors, Proceedings of the
Workshop Model-Theoretic Syntax at 10; ESSLLI '07, Dublin, 2007.
[ pdf 
html ]
We show in this paper how, given a minimalist grammar G, to construct a simple, regular characterization of its well-formed derivations. We obtain both a bottom-up and a top-down characterization of the function from minimalist derivations to derived trees. The same construction extends to minimalist grammars with copying. In other words, the structure languages generated by minimalist grammars with (without) copying are contained in the output languages of (finite copying) tree homomorphisms. Compositionality is naturally formulated as a transduction mapping derivation trees to (terms denoting) semantic values. The compositional semantics for minimalist grammars introduced in Kobele (2006) is naturally expressed in terms of a transduction of the same type as that mapping derivations to derived trees. We present a general method of synchronizing (in the sense of Shieber (1994)) multiple transductions over the same derivation, showing as a result that the form-meaning relations definable by MGs interpreted as per Kobele (2006) can be described as bimorphisms of type B(M,M).

[11] 
Gregory Kobele and Harold Torrence.
Intervention and focus in Asante Twi.
In Ines Fiedler and Anne Schwarz, editors, Papers on Information
Structure in African Languages, volume 46(12) of ZAS Papers in
Linguistics, pages 161-184. ZAS, Berlin, 2006.
[ pdf 
html ]
This paper concerns the distribution of wh-words in Asante Twi, which has both a focus fronting strategy and an in-situ strategy. We show that the focusing and the in-situ constructions are not simply equally available options. On the contrary, there are several cases where the focusing strategy must be used and the in-situ strategy is ungrammatical. We show that the cases in Asante Twi are “intervention effects”, which are attested in other languages, like German, Korean, and French. We identify a core set of intervening elements that all of these languages have and discuss their properties.

[10] 
Gregory M. Kobele and Marcus Kracht.
Pregroup grammars are Turing complete.
In Aviad Eilam, Tatjana Scheffler, and Joshua Tauberer, editors,
Proceedings of the 29th Pennsylvania Linguistics Colloquium, volume 12 of
University of Pennsylvania Working Papers in Linguistics, pages
189-198, 2006.
[ pdf 
html ]
Pregroups were introduced in (Lambek, 1999), and provide a foundation for a particularly simple syntactic calculus. Buszkowski (2001) showed that free pregroup grammars generate exactly the ε-free context-free languages. Here we characterize the class of languages generable by all pregroups, which will be shown to be the entire class of recursively enumerable languages. To show this result, we rely on the well-known representation of recursively enumerable languages as the homomorphic image of the intersection of two context-free languages (Ginsburg et al., 1967). We define an operation of cross-product over grammars (so-called because of its behaviour on the types), and show that the cross-product of any two free pregroup grammars generates exactly the intersection of their respective languages. The representation theorem applies once we show that allowing `empty categories' (i.e. lexical items without overt phonological content) allows us to mimic the effects of any string homomorphism.

[9] 
Gregory M. Kobele.
Features moving madly: A formal perspective on feature percolation in
the minimalist program.
Research on Language and Computation, 3(4):391-410, 2005.
[ DOI 
pdf ]
I show that adding a mechanism of feature percolation (via specifier-head agreement) to Minimalist Grammars (MGs) [Stabler, 1997] takes them out of the class of context-sensitive grammar formalisms. The main theorem of the paper is that adding a mechanism of feature percolation to MGs allows them to implement infinite abaci [Lambek, 1961], which can simulate any Turing Machine computation. As a simple corollary, I show that, for any computable function f over natural numbers, MGs thus enhanced can generate the unary language corresponding to the range of f.

[8] 
Gregory M. Kobele and Jens Michaelis.
Two type 0 variants of minimalist grammars.
In Gerhard Jäger, Paola Monachesi, Gerald Penn, James Rogers, and
Shuly Wintner, editors, Proceedings of the 10th conference on Formal
Grammar and the 9th Meeting on Mathematics of Language, Stanford, 2009. CSLI
Online Publications.
[ pdf 
html ]
Minimalist grammars (Stabler 1997) capture some essential ideas about the basic operations of sentence construction in the Chomskyan syntactic tradition. Their affinity with the unformalized theories of working linguists makes it easier to implement, and thereby to better understand, the operations appealed to in neatly accounting for some of the regularities perceived in language. Here we characterize the expressive power of two, apparently quite different, variations on the basic minimalist grammar framework, obtained either by adding a mechanism of `feature percolation' (Kobele, 2005), or, instead of adding a central constraint on movement (the `specifier island condition', Stabler 1999), using it to replace another one (the `shortest move condition', Stabler 1997, 1999) (Gärtner and Michaelis, 2005). We demonstrate that both variants have equal, unbounded, computing power by showing how each can straightforwardly simulate a 2-counter automaton.

[7] 
Yoosook Lee, Travis C. Collier, Gregory M. Kobele, Edward P. Stabler, and
Charles E. Taylor.
Grammar structure and the dynamics of language evolution.
In Mathieu S. Capcarrere, Alex A. Freitas, Peter J. Bentley, Colin G.
Johnson, and Jon Timmis, editors, Advances in Artificial Life, volume
3630 of Lecture Notes in Computer Science, pages 624-633. Springer,
Berlin, 2005.
Proceedings of the 8th European Conference, ECAL 2005.
[ DOI 
pdf ]
The complexity, variation, and change of languages make evident the importance of representation and learning in the acquisition and evolution of language. For example, analytic studies of simple languages in unstructured populations have shown complex dynamics, depending on the fidelity of language transmission. In this study we extend these analyses of evolutionary dynamics to include grammars inspired by the principles and parameters paradigm. In particular, the space of languages is structured so that some pairs of languages are more similar than others, and mutations tend to change languages to nearby variants. We found that coherence emerges with lower learning fidelity than predicted by earlier work with an unstructured language space.

[6] 
Gregory M. Kobele, Jason Riggle, Richard Brooks, David Friedlander, Charles
Taylor, and Edward Stabler.
Induction of prototypes in a robotic setting using local search
MDL.
In Masanori Sugisaka and Hiroshi Tanaka, editors, Proceedings of
the Ninth International Symposium on Artificial Life and Robotics, pages
482-485, 2004.
[ pdf ]
Categorizing objects sets the stage for more advanced interactions with the environment. Minimum Description Length learning provides a framework in which to investigate processes by which concept learning might take place. Importantly, the concepts so acquired can be viewed as having a prototype structure: the concepts may apply to one object better than to another. We ground our discussion in a real-world setting: objects to categorize are sensor readings of the behaviours of two mobile robots.

[5] 
Gregory M. Kobele, Jason Riggle, Travis C. Collier, Yoosook Lee, Ying Lin, Yuan
Yao, Charles E. Taylor, and Edward P. Stabler.
Grounding as learning.
In Simon Kirby, editor, Proceedings of the Workshop/Course on
Language Evolution and Computation, ESSLLI '03, pages 87–94, Vienna, 2003.
[ pdf 
html ]
Communication among agents requires (among many other things) that each agent be able to identify the semantic values of the generators of the language. This is the “grounding” problem: how do agents with different cognitive and perceptual experiences successfully converge on common (or at least sufficiently similar) meanings for the language? There are many linguistic studies of how human learners do this, and also studies of how this could be achieved in robotic contexts (e.g., (Steels, 1996; Kirby, 1999)). These studies provide insight, but few of them characterize the problem precisely. In what range of environments can which range of languages be properly grounded by distributed agents? This paper takes a first step toward bringing the tools of formal language theory to bear on this problem. In the first place, these tools easily reveal a number of grounding problems which are simply unsolvable with reasonable assumptions about the evidence available, and some problems that can be solved. In the second place, these tools provide a framework for exploring more sophisticated grounding strategies (Stabler et al., 2003). We explore here some preliminary ideas about how hypotheses about syntactic structure can interact with hypotheses about grounding in a fruitful way to provide a new perspective on the emergence of recursion in language. Simpler grounding methods look for some kind of correlation between the mere occurrence of particular basic generators and semantic elements, but richer hypotheses about relations among the generators themselves can provide valuable additional constraints on the problem.

[4] 
Gregory M. Kobele, Travis C. Collier, Charles E. Taylor, and Edward P. Stabler.
Learning mirror theory.
In Proceedings of the Sixth International Workshop on Tree
Adjoining Grammars and Related Frameworks (TAG+6), Venice, 2002.
[ pdf ]
Here we investigate the learnability of classes of mirror-theoretic grammars from dependency structures, which show relations among the lexical items in a sentence, information which is, at least in many cases, plausibly available to the language learner (surface order, morphological decomposition and affixation, and selection relations). We define conditions under which this is possible. Adapting a technique familiar from (Kanazawa, 1998) and others, we show that if the lexical ambiguity in target grammars is restricted, this can provide a basis for generalization from the finite sample. Our results show that the class of mirror-theoretic grammars in which every lexical item has a unique phonetic string (the 1-rigid grammars) is identifiable in the limit from any text of dependency structures.

[3] 
Edward P. Stabler, Travis C. Collier, Gregory M. Kobele, Yoosook Lee, Ying Lin,
Jason Riggle, Yuan Yao, and Charles E. Taylor.
The learning and emergence of mildly context sensitive languages.
In Wolfgang Banzhaf, Thomas Christaller, Peter Dittrich, Jan T. Kim,
and Jens Ziegler, editors, Advances in Artificial Life, volume 2801 of
Lecture Notes in Computer Science, pages 525–534. Springer, Berlin,
2003.
[ DOI 
pdf ]
This paper describes a framework for studies of the adaptive acquisition and evolution of language, with the following components: language learning begins by associating words with cognitively salient representations (“grounding”); the sentences of each language are determined by properties of lexical items, and so only these need to be transmitted by learning; the learnable languages allow multiple agreements, multiple crossing agreements, and reduplication, as mildly context sensitive and human languages do; infinitely many different languages are learnable; many of the learnable languages include infinitely many sentences; in each language, inferential processes can be defined over succinct representations of the derivations themselves; the languages can be extended by innovative responses to communicative demands. Preliminary analytic results and a robotic implementation are described.

[2] 
Gregory M. Kobele.
Formalizing mirror theory.
Grammars, 5(3):177–221, 2002.
[ DOI 
pdf ]
Mirror theory is a theory of (morpho)syntax introduced in (Brody, 1997). Here I present a formalization of the theory, and study some of its language-theoretic properties.

[1] 
Kyubum Wee, Travis C. Collier, Gregory M. Kobele, Edward P. Stabler, and
Charles E. Taylor.
Natural language interface to an intrusion detection system.
In Proceedings, International Conference on Control, Automation
and Systems. ICCAS, 2001.
[ pdf ]
Computer security is a very important issue these days. Computer viruses, worms, Trojan horses, and cracking are prevalent and cause serious damage. Many defenses against such attacks have been developed, including cryptography and firewalls. However, it is not possible to guarantee complete security of computer systems or networks. Recently, much attention has been directed to ways to detect intrusions and recover from damage. Although there have been many research efforts to develop efficient intrusion detection systems, little has been done to facilitate the interaction between intrusion detection systems and users. Reporting and presenting the current state of the computers or networks to the user in an easily understood format, as well as allowing the user to specify security policies, are important aspects of intrusion detection systems. We present our first steps toward developing a natural language interface between a user and an intrusion detection system using minimalist transformational grammar and Prolog. Our system takes a pseudo-English query from the user, parses it using a CYK-like algorithm, converts it into a formula in Horn logic, feeds it to Prolog, and then returns the answer in a simple format.

This file was generated by bibtex2html 1.98.
Unpublished Papers
[6]  Gregory M. Kobele. Parsing ellipsis efficiently. ms, 2017. [ pdf ] 
[5]  Gregory M. Kobele. Decomposing Montagovian dynamics. ms, 2016. [ pdf  code ] 
[4]  Gregory M. Kobele. Inverse linking in minimalist grammars. ms, 2008. [ pdf ] 
[3]  Gregory M. Kobele. Parsing ellipsis. ms, 2007. [ pdf ] 
[2]  Gregory M. Kobele. Generating Copies: An investigation into structural identity in language and grammar. PhD thesis, University of California, Los Angeles, 2006. [ pdf ] 
[1]  Gregory M. Kobele and Harold Torrence. The syntax of complement clauses in Asante Twi. Paper presented at the 35th Annual Conference on African Linguistics, 2004. [ pdf ] 
Slides
[11]  Gregory M. Kobele. Representations in syntax. Slides for Thomas Müntzer lecture (IGRA Klausurtagung), 2017. [ pdf ] 
[10]  Gregory M. Kobele. Higher order structures in minimalist derivations. Slides presented at TAG+13, 2017. [ pdf ] 
[9]  Gregory M. Kobele. LF-interpretation, compositionally. Slides for talk at UChicago, 2016. [ pdf ] 
[8]  Gregory M. Kobele. Making copies: Insights from computation. Slides for the Workshop on Replicative Processes in Grammar, Leipzig, 2015. [ pdf ] 
[7]  Gregory M. Kobele. A derivational approach to phrasal spellout. Slides for a talk presented at BCGL 7, 2012. [ pdf ] 
[6]  Gregory M. Kobele, Evelyne Lagrou, Felix Engelmann, Titus von der Malsburg, Ryan Musa, Sabrina Gerth, Ruben van de Vijver, and John T. Hale. Incremental processing difficulty in cross-serial and nested verb clusters. Poster for AMLaP 2012, 2012. [ pdf ] 
[5]  Gregory M. Kobele and Jens Michaelis. Derivational order and ACGs. Slides for talk presented at ACG@10, 2011. [ pdf ] 
[4]  Gregory M. Kobele. Eliminating sidewards movement. Slides presented at Generative Grammatik des Südens (GGS), 2009. [ pdf  html ] 
[3]  Gregory M. Kobele. On hypothetical reasoning and association with focus. Slides presented at the 9th Szklarska Poreba Workshop, 2008. [ pdf ] 
[2]  Jeffrey Heinz, Gregory M. Kobele, and Jason Riggle. Exploring the typology of quantity-insensitive stress systems without gradient constraints. Handout of talk at the 79th LSA, 2005. [ pdf ] 
[1]  Gregory M. Kobele. Pregroups, products and generative power. Slides presented at the Chieti Workshop on Pregroups and Linear Logic, 2005. [ pdf ] 