BINDING, LOCALITY, AND SOURCES OF INVARIANCE1

Eric Reuland
Utrecht Institute of Linguistics OTS
1. Preamble

The import of a theory can be judged along two dimensions. One is how well it enables us to understand what we know. The other is how well it guides us in exploring what we don't know. Although it is hard to 'objectively' assess the success of a theory along the second dimension, it is by far the most important one for a theory's ultimate success.

One of the assumptions underlying much current work in grammatical theory is the inclusiveness condition. In the formulation of Chomsky (1995:228): "any structure formed by the computation (...) is constituted of elements already present in the lexical items selected for N (=the numeration, EJR); no new objects are added in the course of the computation apart from rearrangements of lexical properties (in particular, no indices...)". A condition of this type cannot be "proven" or "falsified". However, we can assess its success in the long run. The inclusiveness condition enforces a principled distinction between grammatical computations complying with it and grammatical computations that don't. By the same token it enforces a principled distinction between language phenomena that can be captured by computations complying with it and language phenomena that cannot. It therefore cuts up the empirical pie in a way that can be assessed: do the language phenomena on each side of the cut form natural classes in independent respects?

My experience so far is that the inclusiveness condition does indeed provide an important and clear-cut distinction between computations obeying it - what one may call morpho-syntactic computations - and computations that don't - computations in logical syntax in the sense of Reinhart 2006. This distinction does not appear to be generally recognized. The initial formulation of this book's theme only refers to "syntactic computations". As it says in the original conference call: "A key consideration is that on minimalist assumptions locality properties of syntactic dependencies are expected to fall out from the way syntactic computation (including its locally minimized use of computational resources) is defined, as well as the way it interfaces with external systems." This implicitly raises the following question:

(1)	Is all linguistic computation "syntactic"?
And, depending on the answer, is locality a property of all linguistic computations or only of syntactic ones? Or, perhaps, is it only a property of a subset of the syntactic computations, and if so, what kind of subset? These questions and their answers also bear on further issues from the conference call, summarized in (2):

(2)	i.	How many—and which—basic notions of locality are needed, and which are dispensable?
1	I would like to express my gratitude to the organizers of the Conference on Minimalist Approaches to Syntactic Locality, 26-28 August 2009, Budapest, for this very stimulating event, and to the participants for very useful discussion.
	ii.	How is syntactic locality affected at the interfaces? How do interface processes affect, or determine, syntactic locality?
	iii.	Precisely how do anti-locality effects relate to locality effects?
	iv.	What role does 'reference set economy' play in syntactic locality, if any?
For reasons of space I will say very little on most of these questions, except for (2iii). My take on (2i) is that I see little reason to assume more than one notion of locality that is specific to linguistic computations: roughly, the phase-based locality of Chomsky (2001, 2008). In addition, as I will argue at length on the basis of anti-locality effects in binding, there is a general principle that does not allow one to distinguish identicals in a space lacking sufficient structure. This is in effect my answer to (2iii), and indirectly to (2ii). I will include some discussion of economy principles, specifically the role of cancellation in the sense of Chomsky (1995).

2. A Window: anaphoric dependencies

My window into these questions is provided by anaphoric dependencies in comparison to dislocation. Regarding (1), consider (3) and (4):

(3)
	a.	What did the crash destroy?
	b.	*What did Lehman Brothers go bust after the crash destroyed?
(4)
Every banker went bust after the crash destroyed his assets.
As we know, any theory must express that what in (3a) does double duty. It signals that the sentence expresses a question, and it expresses that the answer involves specifying the value of the object of destroy. Thus, any theory must express that what in (3a) is connected to the role the verb destroy assigns to its object, and also that this connection cannot be established in (3b), where it would have to reach into an adverbial clause. From a general perspective, (4) is comparable to (3b). In (4) a connection has to be established between his and every banker. This dependency, however, is not sensitive to the same locality conditions. In (4) the dependent element his is contained in an adverbial clause, whereas the antecedent is in the matrix clause, but this has no influence on the dependency being formed. In an ideal world such a contrast follows from the nature of the computations involved.

In theory-independent terms, the dependencies in (3) represent "filler-gap" relations. In the framework of Chomsky (1981) the gap was represented by a coindexed trace, as in (3)':

(3)'
	a.	What_i did the crash destroy t_i?
	b.	*What_i did Lehman Brothers go bust after the crash destroyed t_i?
Traces and indices are not morpho-syntactic objects, however. Hence they cannot be part of the numeration. Deriving the representations in (3)', therefore, violates the inclusiveness condition. The double duty of what can be naturally expressed, however, by using the same morpho-syntactic object twice in the derivation, as in (3)":

(3)"
	a.	What did the crash destroy (what)?
	b.	*What did Lehman Brothers go bust after the crash destroyed (what)?
Instead of introducing different types of objects - a trace and its binder - we have occurrences of the same morpho-syntactic object in two positions, and, ideally, general principles governing the phonetic realization of these occurrences, for which I refer to Nunes (2004). Thus, the dependencies in (3) crucially involve the displacement property of natural language (Chomsky 1995). With indices gone we also need a way to represent that the two occurrences of what are occurrences of the "same element". Since both correspond to one element in the numeration, this identity relation is indeed represented in the derivational history. This should be enough to guarantee that the computation can see whether certain elements in the structure are copies.2 Although nontrivial matters of execution remain (specifically of how to maintain identity, despite the fact that properties determined by the immediate environment - such as Case - may cause differences between the copies), I will not discuss them here, but assume that they are in principle solvable.

This model of representing the dependency in (3) is crucially not available in (4). There is no gap, hence no displacement in any obvious sense is involved. There is no workable way in which his can be said to be an occurrence of every banker without losing the explanation of why what in (3) cannot be spelled out twice. The canonical way to represent the dependency in (4) is given in (5):

(5)
Every banker λx (x went bust [after the crash destroyed x's assets])
In the representation of (5), the interpretive dependency between his and every banker is captured by what Reinhart (2006) refers to as logical syntax binding. (6)
Definition of A-binding (Reinhart 2006; logical-syntax based definition)3
	α A-binds β iff α is the sister of a λ-predicate whose operator binds β.
The definition in (6) covers binding relations that are not syntactically encoded (though not dependencies that are only represented in discourse).4 The process relating (5) to (4) can be characterized as in Reinhart (2006: 171):5 move the subject every banker from its argument position, adjoining it higher up in the structure (by Quantifier Raising - QR - in the sense of May 1977), substitute a variable for its original position, and prefix the minimal category containing the subject and the pronominal to be bound (here his) with λx. If the variable translating his and the variable resulting from QR are chosen identically - which is just an option, not enforced, since his may refer to someone else - both will be bound by the prefixed λ-operator and end up being A-bound by the original argument in its adjoined position. It is important to see that this logical "machinery" is just what is needed to make linguistic binding precise.

Crucially, however, deriving (5) from (4) in this manner violates the inclusiveness condition. The pronoun his is replaced by a variable, and even if quantifier raising of every banker initially left a copy, this copy has been replaced by the variable x under the simultaneous prefixation of the expression λx. Neither lambdas nor variables are part of the numeration.
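Schematically, the derivation of (5) from (4) just described runs as follows (my own step-by-step rendering of Reinhart's recipe, using only the ingredients mentioned above):

	(4)			Every banker went bust [after the crash destroyed his assets]
	QR:			[Every banker] [(every banker) went bust [after the crash destroyed his assets]]
	variable substitution:	[Every banker] [x went bust [after the crash destroyed x's assets]]
				(with his translated by the same variable - the optional choice noted above)
	λ-prefixation:		Every banker λx (x went bust [after the crash destroyed x's assets])   (= (5))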
2	For an alternative way to represent identity between such occurrences, see Frampton (2004).
3	"Logical syntax" is a representation of linguistic structure that is sufficiently fine-grained to feed the inference system. For instance, from we voted for me one may infer that I voted for me, but from we elected me one cannot infer that I elected me. Hence the collective-distributive distinction must be represented at that level.
4	Note that the canonical c-command requirement on binding follows from compositional interpretation procedures.
5	See Heim and Kratzer (1998) for an alternative.
The question is, then, what type of computation is involved. One line to pursue is that deriving (5) from (4) is really a step in the interpretation process. Interpretation by its very nature involves moving between modes of representation. Hence, it intrinsically violates inclusiveness. But let's see if we can stay closer to a syntactic view. An alternative line would argue that what (5) sets out to capture is intrinsically linguistic; hence the argument about the nature of interpretation, though in principle valid, does not necessarily apply here. If so, the nature of the derivation of (5) from (4) should be reassessed.

Conceptually, though not always technically, reassessing the status of the lambda is trivial. What it does is regiment the relation between binders and variables, but in principle that can also be achieved by interpreting (properties of) the configuration itself. Hence a segmental lambda is in principle dispensable. A subsequent question is what variables are. Are they expressions from a different domain, or could they be part of the numeration? In fact, variables appear to have a dual status. On the one hand they are linguistically real as open positions in predicates. If one takes a predicate like destroy, the fact that it has both a destroyer role and a destroyee role to be discharged is visible in the lexical entry, hence represented in the numeration. On the other hand, the most minimal elements able to receive theta-roles are pronouns. They are also candidates for the status of variables. Note that pronouns and nouns share the fact that they have φ-features. Suppose the other face of variables is that they are in fact φ-feature bundles. If so, the occurrence of x in x went bust ... can be seen as a morpho-syntactic residue of the expression (every) banker, for instance resulting from deleting all components of the lower copy of every banker that are not required to supply a minimal argument in the local environment. Similarly, the variable resulting from his would be just the φ-feature component of his. From this perspective, pure φ-feature bundles are the instruments for a segmental representation of variables. Although this is compatible with the position that the vocabulary of "logical syntax" does not go beyond what is supplied by the numeration, hence obeys inclusiveness, it does not yet give us binding, as can be seen in the following rendering of (5), without the λ-expression:

(5)'
[Every banker_φ1 [φ1 went bust [after the crash destroyed φ2's assets]]]
By the properties of the derivation, the φ-bundle on every banker and the residual φ1 in "φ1 went bust" are copies. However, φ1 and the φ2 in "φ2's assets" are not. Even if φ1 and φ2 have the same feature composition, they are different objects, just like the two occurrences of cats in cats were chasing cats represent different elements in the numeration, and are interpreted as different groups of cats. How, then, can we represent the fact that φ1 and φ2 may enter into an interpretive dependency? Moving φ1 and substituting it for φ2 would violate the principle of recoverability of deletions, since both φ1 and φ2 have a full feature specification (see Chomsky 1995, Reuland 2001, and Reuland 2011). Interpreting φ1 and φ2 by assigning both the same value from the domain of discourse, as in standard cases of coreference/covaluation, doesn't work either. This is available in cases such as John went bust after the crash destroyed his assets, where John and his can be assigned the same individual j as their values, but not in (5). Every banker does not denote an individual (Heim 1982), and φ1 doesn't either. However, this can in principle be resolved by just a minimal extension of the scope of the interpretation system. The interpretation system values expressions by assigning them mental objects reflecting phenomena in the outside world. Linguistic expressions themselves are also phenomena in the outside world. Hence there can be mental objects reflecting them. Hence, a system without restrictions on possible values should also allow the following option:

(7)
Expressions can be valued by expressions
That is, the crucial factor allowing a bound variable interpretation of pronouns is that φ2 can be assigned φ1 as its value, where its subsequent interpretation is derivative of the interpretation of φ1. Put into an evolutionary perspective, language as we know it is the result of three crucial steps: i. the development of lexical items (LIs) as arbitrary form-meaning combinations; ii. the combinability of LIs into expressions that can receive values (Hauser, Chomsky, and Fitch 2002; Chomsky 2008); iii. the possibility for expressions to value expressions (the core mechanism underlying quantification).

So, in reassessing bound variable representations and the dependencies they express in terms of inclusiveness, we are left with one extra-syntactic residue: covaluation as the process that establishes the dependency. In this sense, variable binding is not syntactically encoded. It is this residue that exempts variable binding dependencies from locality. This raises two further questions: i. why is anaphor binding subject to locality, and ii. why is pronoun binding subject to anti-locality? Ideally, locality conditions on binding are to be explained by independent properties of the computational system. I will review how such an explanation is provided by the approach developed in Reuland (2001, 2005a,b, 2008, 2011). I will then show how this approach helps us understand a number of rarely discussed facts that are quite puzzling from the perspective of the canonical binding theory. Let's then first review a number of basic issues in binding.

3. An initial perspective on binding: Canonical Binding Theory (Chomsky 1981)

Since Chomsky (1981) the following two conditions form the core of the canonical binding theory:

(8)
Canonical Binding Theory (CBT)
	A.	An anaphor is bound in its local domain (governing category).
	B.	A pronominal is free in its local domain (governing category).
Binding is defined in terms of coindexing and c-command, and the local domain for an element α is roughly the minimal category containing α, an element assigning Case or a Θ-role to α, and a subject. Over the last decades the CBT has had to face three types of problems, briefly discussed below.

First of all, there are significant empirical problems. There is a wide range of variation in anaphoric systems cross-linguistically. In (9) I list a few.

(9)
Illustration of cross-linguistic variation

-	There are systems with more distinctions than just the distinction between anaphor and pronominal. For instance (limiting ourselves to a very small subset of cases to exemplify the point):
	o	Dutch has a 3-way system: pronominals such as hem 'him', simplex anaphors (henceforth SE-anaphors) such as zich '"himself"', complex anaphors (SELF-anaphors) such as zichzelf 'himself' (Koster 1985, Everaert 1986);
	o	Icelandic, and Norwegian (with the other mainland Scandinavian languages), have a 4-way system: pronominals, SE-anaphors, SE-SELF, and Pronominal-SELF (e.g. Hellan 1988).
-	In addition to structural conditions, properties of predicates also play a role in determining binding possibilities:
	o	English has John washed (no object) with a reflexive interpretation, but not *John hated;
	o	Dutch has Jan waste zich (a SE-anaphor), but not *Jan haatte zich, etc.
-	"Reflexive" clitics in Romance and Slavic languages do not behave like canonical anaphors.
-	There is cross-linguistic variation in the binding domains:
	o	Long-distance binding of Scandinavian seg/sig versus more local binding of Dutch zich and German sich (Everaert 1986, Hellan 1988, Reuland 2011).
-	Under certain structurally defined conditions certain anaphoric forms need not be bound:
	o	Free ("logophoric") use of himself in English: John was hoping that Mary would support *(no one but) himself;
	o	Free ("logophoric") use of sig in Icelandic (Thráinsson 1991).
-	Certain languages allow locally bound pronominals:
	o	him in Frisian: Jan waske him 'Jan washed', and in Old English: he cladde him 'he dressed';
	o	1st and 2nd person pronominals across the board: Ich wasche mich, jij wast je, nous nous lavons, etc., but not in their English counterparts: I wash myself/*me, you wash yourself/*you, etc.
-	Certain languages require a special form for local binding, but do not require that form to be locally bound:
	o	Malayalam (Jayaseelan 1997)
		raaman_i tan-ne_i *(tanne) sneehikunnu
		Raman    SE-acc   self     loves
		'Raman loves him*(self)'
		raaman_i wicaariccu [penkuttikal tan-ne_i tanne sneehikkunnu ennə]
		Raman    thought     girls       SE-acc   self  love         Comp
		'Raman thought that the girls love himself'
	o	Peranakan Javanese (Cole, Hermon, Tjung, Sim & Kim 2008)
		[Gurue Tono_j]_i ketok dheen_*i/j/k nggon kaca.
		teacher-3 Tono   see   3sg          in    mirror
		'Tono's teacher saw him/her in the mirror.'
		Ali_j ngomong nek aku pikir [Tono_i ketok awake dheen_i/j/k nggon kaca].
		Ali   N-say   COMP 1sg think  Tono  see   body-3 3sg         in   mirror
		'Ali said that I thought that Tono saw himself/him in the mirror.'
Summarizing, we see a great deal of diversity. In particular, Modern English and Icelandic, but also Malayalam and Peranakan Javanese, show us that there is no absolute local binding obligation for "anaphors". Frisian and Old English show that there is no absolute obligation of local freedom for "pronominals". It follows that it is impossible to provide an independent characterization of anaphors versus pronominals in terms of an intrinsic obligation to be locally bound or free. Hence, the features [+ anaphor] and [+ pronominal] (Chomsky 1981 and subsequent work) are not primitive lexical features.

Second, as we already saw in the preamble, there is a theoretical problem, since the CBT uses indices, which violates the inclusiveness condition. And thirdly, there is a conceptual problem, succinctly formulated as a question: why would there be conditions on binding at all?

4. Towards an answer

Let's take as a starting point the minimalist conception of grammar as a computational system in the sense of Chomsky (1995), defining a mapping between form and interpretation. If so, one must distinguish between three sources of cross-linguistic invariance (language universals):

(10)
Sources of invariance (Reuland 2005a)
	Type 1.	Necessary properties of computations, modulo a medium in which they take place;
	Type 2.	Economy of computation, modulo resource types and restrictions;
	Type 3.	General properties of computations specific to language.
Types 1 and 2 correspond to what Chomsky (2005) calls third factors in language design. With Chomsky, I will assume that the computational system itself involves two types of operations. Complex linguistic objects are formed from elementary objects by the operation of Merge (with external Merge applying to separate objects in the work space, and internal Merge reusing subparts of an object to extend it). Dependencies may also be established on the basis of matching requirements between feature structures: Agree (see below for some more detail). In such a system GB-style conditions cannot be formulated directly. Rather, one should distinguish between micro-universals and macro-universals: micro-universals concern the elementary properties of the computational system; macro-universals result from the interaction of elementary processes. A specific example of a micro-universal that is also a type 1 universal is given in (11):

(11)
Inability to Distinguish Indistinguishables (IDI). A computational system requires a work space allowing it to distinguish different occurrences of identical expressions.
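As a toy illustration (mine, not part of the proposal), the point of (11) can be seen in any representation that lacks order or other structure: two tokens of the same expression simply collapse into one.

    # Toy illustration of (11): occurrences of identical expressions can only be
    # told apart if the workspace provides structure such as order/position.
    ordered = ("x", "x")           # an ordered workspace keeps the two occurrences apart
    structureless = set(ordered)   # a structureless workspace collapses them

    print(len(ordered))            # 2: two occurrences remain visible
    print(len(structureless))      # 1: the occurrences are indistinguishable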
As we will see in more detail below, IDI is the principle underlying a significant part of condition B. As discussed, in a computational system without indices the general notion of variable binding has to be represented by covaluation of φ-feature bundles. I will assume that for empirical purposes variable binding under this conception can still be modelled on the basis of Reinhart's definition of A-binding given in (6). Since it involves a real semantic residue, let's refer to it as semantic binding.

Not all dependencies covered by the CBT can be reduced to semantic binding. Anaphor binding is subject to locality conditions. Since locality conditions are in the province of syntax, this means that in the case of anaphor binding the dependency is encoded by syntactic means. As we concluded on the basis of the cross-linguistic pattern, the notion of an anaphor cannot be defined in terms of a specific feature. Rather, I will say that a particular element can be used as an anaphor iff it can enter into an interpretive dependency with an antecedent that is encoded by a syntactic operation. The two available operations are Merge and Agree, as we saw. Hence I will review how these operations can help establish antecedency relations, basing myself on Reuland (2001, 2005a,b, 2008, 2011).6 I will discuss two types of anaphors: simplex anaphors such as Dutch zich, Norwegian seg, and Icelandic sig (briefly, SE-anaphors), and complex anaphors; many of the latter are formed of a pronominal or a SE-anaphor and the element SELF or a cognate, as in English himself, Dutch zichzelf, Norwegian seg selv, Icelandic sjalfan sig (hence, SELF-anaphors).

4.1. Binding of SE-anaphors

SE-anaphors are essentially pronominals with a deficient specification for φ-features. Zich, seg, and sig are all specified for 3rd person, but lack a specification for number and gender. They cannot be used deictically. (12) illustrates the use of a SE-anaphor.
6	See Hornstein (2001), Kayne (2002), Boeckx et al. (2007), Zwart (2002) and subsequent work, and Safir (2004) for alternative approaches; see Reuland (2011) for comparative discussion.
(12)
	Jan voelde [zich wegglijden]
	Jan felt   [SE   slip away]
Here the finite verb voelde 'felt' has an ECM complement, with a SE-anaphor as its subject, avoiding complications to be discussed below that may arise if the SE-anaphor is a direct object. The question is how zich ends up being bound by Jan. Three questions come up: i. How is the dependency encoded? ii. Why is a 3rd person pronoun instead of zich ruled out? And iii. Why does zich have to have an antecedent? First consider the encoding. The relevant properties of the configuration are indicated in (13), with the external argument Jan (EA) in the low position before movement to T: (13)
[Tns [EA [v* [ V [SE ....]]]]]
As shown in Reuland (2001, 2005b), binding of SE-anaphors can be syntactically encoded by feature chains (Pesetsky & Torrego 2004, henceforth P&T) established by check/agree via a series of probe-goal relations, essentially based on structural Case being analyzed as uninterpretable Tense. First, a Subject-v*-T dependency is established by Tns's unvalued interpretable T-feature probing EA's uninterpretable and unvalued T-feature, and subsequently probing again for v*'s valued uninterpretable T-feature. This leads to valuation of T on Tns and the Subject. Note that this dependency is not based on "φ-feature agreement". The dependency extends to the SE-anaphor given that SE-anaphors have unvalued interpretable φ-features in addition to unvalued uninterpretable structural accusative Case, and that Tns and v* have unvalued uninterpretable φ-features. The subject DP has valued interpretable φ-features to provide a value for the unvalued instances. SE moves to the edge of v* due to v*'s object EPP feature, which puts it in the proper position for valuation. The process of chain formation is set in motion by the structural Case feature, since only structural Case is an independent syntactic trigger. Thus, the φ-feature dependency gets a free ride on the Case dependency. It is included in the ride for interface reasons. With P&T I assume that only full feature bundles correspond to morpho-syntactic objects that can be interpreted. The T-dependency extends to the full φ-feature dependency in order to make the dependency visible at the C-I interface. The dependencies are summarized in (14), with EA providing the required valued and interpretable instance of [φ]:7

(14)
[Tns[uφ] [SE[uφ] [EA[valφ] [v*[uφ] [ V (SE[uφ]) .... ]]]]]
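The effect of valuation along this chain can be rendered procedurally as follows (a rough sketch of my own, not P&T's formalism; it also anticipates the role of recoverability discussed directly below):

    # Rough sketch of phi-valuation under Agree; illustration only.
    # Per the discussion below, person is an interpretive constant, number is not,
    # so an already valued number feature may not be overwritten (recoverability).
    INTERPRETIVE_CONSTANTS = {"person"}

    def form_chain(dependent, antecedent):
        """Value the dependent's phi-slots from the antecedent, as in (14)."""
        for feat, val in antecedent.items():
            if dependent.get(feat) is None or feat in INTERPRETIVE_CONSTANTS:
                dependent[feat] = val      # valuation (or harmless overwrite)
            else:
                raise ValueError(f"overwriting {feat} is unrecoverable: derivation cancelled")
        return dependent

    jan  = {"person": 3, "number": "sg"}
    zich = {"person": 3, "number": None}   # SE-anaphor: unvalued number
    hem  = {"person": 3, "number": "sg"}   # 3rd person pronominal: valued number

    print(form_chain(zich, jan))           # chain formed: zich shares Jan's phi-values
    # form_chain(hem, jan)                 # would be cancelled: no chain with a pronominal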
How do we get from here to binding? As in the case of pronoun binding discussed earlier, SE is merged as an independent element from the numeration. In P&T's approach the exchange of values in the formation of a feature chain unifies the features it contains. For two occurrences F1 and F2 of a feature F on different elements in the numeration - say, F1 unvalued but interpretable and F2 valued but uninterpretable - valuation by Agree creates a feature chain <F1, F2>, which is one syntactic object with two instances of F as its members. F1 and F2 don't qualify as different occurrences after valuation. By valuation, feature values are copied/overwritten; hence all the tokens of φ in (14) share instances of one feature.
7	As Hicks (2009) points out, this derivation assumes a second specifier of v*. My implementation here is in line with Chomsky (1995) and subsequent work, where such an Edge position (reflecting Object shift) is also postulated for wh-movement from object position. Note that in Dutch and Scandinavian this edge position is independently needed for Object shift.
As we saw with copies in dislocation structures, copying/overwriting of feature values encodes identity. Hence, we do have a syntactic representation of the binding relation.

Let's see more precisely why this is allowed with SE-anaphors and not with pronominals. As discussed in Reuland (2001), and in more detail in Reuland (2011), the crucial difference between SE and a pronominal is that SE is underspecified for number. Other features of pronominals, such as person and category, are interpretive constants; that is, each occurrence of such a feature on different elements in the numeration makes precisely the same contribution to interpretation. They are all interchangeable. Hence, overwriting one occurrence of such a feature by another one is allowed. Number, however, is different. Overwriting one occurrence of a number feature by another one violates the principle of recoverability of deletions (PRD), hence is blocked. Consequently, with SE in the position of (14) true identity of feature bundles can be effected; with a 3rd person pronominal in the position of SE in (14) no chain can be formed. This answers the question of why the pronominal cannot enter a chain. It leaves open, however, why this option cannot simply be bypassed. That is, why don't we have (12)' alongside (12), with hem semantically bound by Jan?

(12)'
	*Jan voelde [hem wegglijden]
	 Jan felt   [him  slip away]
The answer resides in economy. The envisaged interpretation is (12)":

(12)"	Jan λx (x felt [x slip away])

Since deriving this representation by encoding the dependency in the syntax is blocked as a violation of the PRD, this derivation is cancelled (Chomsky 1995). Consequently, other options to derive (12)" from the same numeration, in this case by semantic binding, are blocked as well. Note that we don't have to compare the zich- and the hem-options, invoking global economy. The hem-option is just blocked in its own right. This leaves the final question: why does zich have to have an antecedent? In fact, nothing intrinsic requires an element with a deficient φ-feature specification to acquire a full specification. It is just the syntactic environment that mechanically enforces the chain.

4.2 Binding of SELF-anaphors

As discussed extensively in Reuland (2005a, 2008), binding of SELF-anaphors is syntactically encoded by Internal Merge ("covert" movement adjoining SELF onto the predicate), as illustrated in (15).

(15)
	a.	John admires himself
	b.	John SELF-admires him-self
Informally, the effect of SELF-movement on the interpretation is given in (16) (see Reuland and Winter 2009 for a formal analysis): (16)
||V|| ∩ ||SELF||
I assume that the semantics of SELF is that it denotes a reflexive relation. SELF (like body parts in general) is inherently relational: elements in the set {x | SELF(x)} necessarily bear the being-a-SELF-of relation to some individual, hence they can stand proxy for that individual.
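Under the assumption just stated that SELF denotes the reflexive (identity) relation on individuals, (16) can be spelled out as follows (my rendering; the formal analysis is given in Reuland and Winter 2009):

	||SELF|| = {<x, x> | x ∈ D}
	||SELF-V|| = ||V|| ∩ ||SELF|| = {<x, y> | <x, y> ∈ ||V|| and x = y}

Intersecting ||admire|| with the identity relation in this way leaves only the pairs <x, x> such that x admires x, which is what the SELF-marked predicate in (15b) is meant to express.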
As (16) expresses, the relation denoted by V is intersected with the reflexive relation denoted by SELF, with intersection being the canonical interpretation of adjunction (Chomsky 2008). Thus, the locality of condition A of the CBT reduces to the locality of SELF-movement, which in turn reduces to the locality of head-movement. As already noted in Reuland (2001), the SELF-movement approach gives an immediate explanation of the distribution of exempt anaphors in English: (17)
Max was happy that the queen invited Lucie and himself for tea
In (17) movement of SELF onto the predicate invite is barred by the coordinate structure constraint. Other constraints cause exemption in adjunct cases. Of course, the question is why SELF would have to move. As (17) shows, there is no intrinsic obligation for SELF to do so. A simple answer is that it moves for reasons of economy. As argued in Reuland (2001, 2008), encoding a dependency in the syntax is blind, and cheap: it minimizes the number of cross-modular steps in deriving the interpretation. In the absence of evidence to the contrary, such a third factor explanation is the one to entertain.

All in all we can see that locality conditions on anaphor binding can be reduced to conditions on Internal Merge and on Check/Agree. Thus, to partially answer one of the major conceptual questions we started with: there is no condition A as a special condition on binding, only economy. What about condition B?

4.3. Deriving Condition B

Antilocality effects in binding (condition B) come in two types. One type we have already discussed: 3rd person pronominals cannot enter a φ-feature chain with their antecedents. This, together with economy, explains why they cannot be locally bound in Dutch, as in (12)'. Below we will discuss why they can in Frisian and Old English. But first let's discuss an antilocality effect on SE-anaphors. As is well known since Everaert (1986), there is a class of verbs in Dutch and other languages that don't allow a SE-anaphor as an object. Typical representatives of this class are subject experiencer verbs such as haten 'hate', bewonderen 'admire', and kennen 'know'. So, we find (18a) with a SELF-anaphor, but not (18b) with a SE-anaphor.

(18)
	a.	Alice bewonderde zichzelf
		'Alice admired herself'
	b.	*Alice bewonderde zich
The question is why, and why this class? As noted in Reinhart and Reuland (1993), reflexivity must be licensed. If one consults the typological literature, this is at least a very strong tendency. Schladt (2000) gives an overview of over 140 languages from many different language families which all require a special marking of reflexive predicates instead of simply having a locally bound simplex anaphor or pronominal. Heine and Miyashita (2008: 172, 175) state that "reflexivity and reciprocity are universal concepts in so far as all languages can be expected to have some grammaticalized expression for both." See also Moyse-Faurie (2008: 107).

Progress, however, is not served by accepting tendencies at face value. If one sees that language after language does something special to mark reflexivity, the proper response is to take this seriously, look for a general principle explaining it, and then proceed to investigate under what conditions the phenomenon does not show up. Languages show a variety of strategies to license reflexives: they use SELF-elements or body parts, they may double the pronoun, or they use special markings on the verb - infixes, suffixes, clitics, etc.
As argued in Reuland (2005a, 2008), the reason resides in the IDI given in (11), a type 1 universal. Any computational system must be able to distinguish between different occurrences of identical expressions. This has consequences for reflexive predicates. Consider a transitive predicate V_Trans (x, y). If one argument of a transitive predicate binds another, and no further operation applies, the result is λx. V_Trans (x, x). This representation of the predicate has two "occurrences" of an identical variable. I put "occurrences" within quotes: the grammatical system can only distinguish these two tokens of x as different occurrences if the workspace allows this, in terms of order or other structure. At the C-I interface there is neither order (a PF property), nor pure structure (only terms are readable, not intermediate projections such as V'; see Chomsky 1995). Hence these tokens cannot be distinguished as separate occurrences. Effectively the result is, therefore, V_Trans (x). This creates a mismatch between the syntactic (transitive) and semantic (intransitive) valence of the predicate. Hence, for a well-formed structure to obtain, one of two strategies must have applied.8

One option is that the arguments are formally kept apart and transitivity is preserved. In this type of strategy the bound element occurs with an expression like SELF (English, Dutch, etc.) or a body-part noun (Georgian, Basque, and many others). In all such cases the variable is protected. The structure of (18a) at the relevant level is then as in (19a):

(19)
	a.	Alice λx (admired (x, SELF(x)))
	b.	Alice λx (admired (x, f(x)))
There are other variants of this strategy, which all have in common that the bound element is embedded in a complex structure of the general type (19b). The arguments are not only syntactically distinct, but also semantically, since the complexity marker maps the x onto some element f(x) such that f(x), though distinct from x, can stand proxy for x.

Alternatively, the mismatch is resolved by an operation on the argument structure of the predicate. One instance of this strategy is that the predicate is morpho-syntactically detransitivized. So, instead of *V_Trans (x), one uses V'_Intrans (x), and the thematic roles [θ1] and [θ2] of V_Trans are bundled, as in V'_Intrans [θ1, θ2] (x), and jointly assigned by V'_Intrans to x. This requires no special addition to the theory. As Reinhart (2002) and Reinhart and Siloni (2005) (R&S) have shown, natural language has a set of operations on argument structure, of which bundling of thematic roles is just one. The application of such an operation may be overtly reflected in morpho-syntax (Russian myt' 'to wash' - myt'sja 'to wash oneself'), but it need not be, as we see in English John washed. As argued by R&S, bundling can apply in the lexicon or in the syntax. In lexicon languages bundling is typically lexically restricted. It generally obtains with grooming verbs such as wash, dress, etc., but not with subject experiencers (see Reuland 2011 for discussion of why this may be so). In syntax languages it is free. French, which is a syntax language, therefore makes no distinction in reflexive marking between a grooming verb such as laver 'wash' (Jean se lave 'Jean washes') and a subject experiencer verb such as haïr 'hate' (Jean se hait 'Jean hates himself'). In lexicon languages bundling also affects the Case of the verb. In languages with a "marginal" Case system, Case may disappear under bundling. Hence in English we find John washed with a reflexive interpretation, but no object. That we also find John washed himself is due to the fact that there is no obligation for the transitive entry wash to undergo bundling.
8	IDI may seem reminiscent of Farmer and Harnish's (1987) Disjoint Reference Presumption (DRP), but is grounded in the computational component of our cognitive system. IDI and DRP make different predictions for non-action verbs, in favor of IDI.
It is just a reflexive instantiation of the transitive predicate wash, on the same footing as the transitive predicate hate. In Dutch, which is also a lexicon language, bundled entries take the simplex anaphor zich. In Dutch, which has a less marginal Case system than English, bundling leaves a structural Case residue on the verb, which must be checked. This is the role of zich. So, in Dutch we find Jan waste zich instantiating the reduced entry, Jan waste zichzelf as an instantiation of the non-reduced entry, but only Jan haatte zichzelf, since haten 'hate' cannot undergo bundling and reduction. Note that with lexically reflexive verbs zich is crucially not interpreted as a semantic argument. This is perhaps even clearer in the case of verbs such as schamen 'shame' or gedragen 'behave', which simply lack a second argument and nevertheless require zich. Under the chain formation approach this requires no stipulation: when zich enters a chain with its antecedent, simply one syntactic object is created.

Let's now briefly discuss two apparent problems for the analysis so far: local binding of 1st and 2nd person pronouns, and local binding of 3rd person pronouns in languages like Frisian and Old English.

4.4 Local binding of 1st and 2nd person pronouns

Many languages allow locally bound pronominals in 1st and 2nd person. For instance, in Dutch and most other Germanic languages we find the pattern in (20a); in French and other Romance languages we find the pattern of (20b).

(20)
	a.	Ik was mij; jij wast je; wij wassen ons; jullie wassen je
		I wash me; you wash you; we wash us; you wash you
	b.	Je me lave; tu te laves; nous nous lavons; vous vous lavez
Note that the role of mij, je, etc. in Dutch is exactly like the role of zich in 3rd person. They occur as 'syntactic' objects of verbs with lexical bundling, where they check residual Case. Furthermore, they show the effects of IDI, since hate-type verbs require mijzelf, jezelf, etc., when reflexive in 1st or 2nd person. In section 5 below we will see that there is independent evidence that they do indeed enter a syntactic chain with their antecedent. That 1st and 2nd person pronouns allow chain formation with their antecedents is the immediate effect of the way in which the PRD regulates chain formation. The reason the PRD blocks chain formation with 3rd person pronominals is that different occurrences of the number feature in the numeration (or perhaps a property dependent on number) are not interpretively equivalent. Overwriting one occurrence by another deletes content that is not recoverable. 1st and 2nd person pronouns, however, have a fixed interpretation within one reportive context. Thus, replacing the features of one occurrence of me by the features of another occurrence changes nothing in the interpretation. No information gets lost; hence the PRD is not violated.

This leaves the case of Frisian and Old English, but, as we will see, also some lesser known properties of Modern English. Since these form an interesting complex of facts, I will discuss them together in a separate section. First, however, I will discuss an issue in economy.

5. Economy, binding and chains

Economy considerations in grammar can be found as early as Reinhart (1983). Reinhart argues that using a variable binding strategy to interpret a pronoun entails closing an open expression at a point where using the coreference option would require it to stay open, since coreference closes the expression only after a further search through the discourse. Assuming that keeping an expression open requires keeping it in working memory, and that keeping material in working memory carries a cost, variable binding is less costly, hence preferred. Where the option exists, recourse to coreference is not considered.
This is the basis of her original approach to Rule I. This correctly captures that in sentences like (21) the coreference strategy cannot bypass the prohibition of binding (by the canonical condition B in Reinhart 1983; by conditions on reflexivity and chain formation in the approach outlined above).

(21)
*Oscar hates him
However, as discussed in Reinhart (2006), this incorrectly rules out the availability of the strict reading in VP-deletion contexts such as (22). Hence, Reinhart pursues an alternative. (22)
Oscar loves his dog and Charles does too.
For a full discussion I have to refer to Reuland (2010, 2011). For the moment let the following summary suffice. As shown by Vasic et al. (2006) and Koornneef (2008) processing economy does play a significant role. We do indeed find processing evidence for the hierarchy in (23): (23)
Economy hierarchy: syntax < semantics < discourse
However, it does not give rise to categorical effects. Categorical effects obtain when a derivation is cancelled. In that case all derivations in the same reference set are inaccessible. This can be formulated as in (24):

(24)
Absolute economy: rejection is final (inspired by Reinhart 2006)
	If the derivation of a particular interpretation of a certain expression in a given component of the language system violates a fundamental principle of grammar (Principle of Recoverability of Deletions (PRD), feature coherence), this derivation is cancelled (Chomsky 1995). Hence, access to subsequent components in the economy hierarchy to derive precisely the same interpretation for the given expression is blocked.
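To see how (23) and (24) work together, here is a minimal procedural sketch (my own rendering, not part of the proposal); the difference between a derivation that is cancelled and one that is never started will matter again in section 6:

    # Sketch of (23)-(24): components are tried in order of cost; a cancelled
    # derivation blocks the same interpretation in all later components.
    HIERARCHY = ["syntax", "semantics", "discourse"]   # (23)

    def derive(interpretation, attempts):
        """attempts maps a component to 'ok', 'cancelled' (a PRD or feature-coherence
        violation), or 'unavailable' (no derivation is even started in that component)."""
        for component in HIERARCHY:
            outcome = attempts.get(component, "unavailable")
            if outcome == "ok":
                return f"{interpretation}: derived in {component}"
            if outcome == "cancelled":
                return f"{interpretation}: cancelled in {component} - rejection is final"
            # 'unavailable': nothing was cancelled, so the next component may be tried
        return f"{interpretation}: not derivable"

    # Dutch (12)': the chain with 'hem' is cancelled, so semantic binding is blocked too.
    print(derive("bound reading of (12)'", {"syntax": "cancelled", "semantics": "ok"}))
    # A pronoun that never enters chain formation leaves semantic binding available.
    print(derive("bound reading of (12)'", {"syntax": "unavailable", "semantics": "ok"}))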
It is this type of economy that is required by the prohibition of locally bound pronominals in environments such as (12)' in Dutch. Independent evidence for this type of economy is provided by binding in Brazilian Portuguese (BP) (Menuzzi 1999). BP has two ways of expressing the 1st person plural pronominal: a gente and nós. Its 1st person interpretation notwithstanding, a gente is formally 3rd person, as indicated by verbal agreement. This entails that nós and a gente differ in φ-feature composition. Yet, nós is a possible binder for a gente and vice versa. This entails that for semantic binding the semantic type prevails. This is illustrated in (25):

(25)
	a.	Nós achamos que o Paolo já viu a gente na TV
		'We think that Paolo has already seen us on TV'
	b.	A gente acha que o Paolo já nos viu na TV
		'We think that Paolo has already seen us on TV'
This option also exists in a more local environment such as locative PPs, a domain where Dutch and English show no complementarity between pronominals and anaphors. Both (25) and locative PPs (which I will not illustrate here for reasons of space) are environments outside the domain of the chain condition. Crucially, however, in the environment of (26) a semantic match is not sufficient. Here, binding is ruled out unless antecedent and pronominal match in φ-features.
(26)
	a.	Nós deviamos nos preparar para o pior
		'We must prepare ourselves for the worst'
	b.	*A gente devia nos preparar para o pior
	c.	A gente devia se preparar para o pior
	d.	*Nós deviamos se preparar para o pior
In a nutshell, a gente cannot bind nos, nor can nos bind the 3rd person clitic se, which would be the proper bindee for a gente. Unlike in (25), in (26) the binding is within the domain of the chain condition. Chain formation is forced by economy if the configuration allows it. Syntactic chains are based on φ-feature sharing, and non-matching features result in an ill-formed syntactic object. In Chomsky's (1995) terms, they lead to a cancelled derivation, and a cancelled derivation blocks alternatives. Consequently, it is impossible to bypass the chain-forming mechanisms and take the option in which a gente in (26b) simply semantically binds nos - which we know it would be able to do by itself. In addition to showing the crucial role of economy, facts of the BP type also show that the syntactic micro-structure, down to the level of morpho-syntactic features, plays a crucial role in the conditions on binding. The same reasoning applies to restrictions on locally bound pronominals in other languages. This leads us to the question of what happens when 3rd person pronominals are bound locally, to be discussed in the next section.

6. Anti-locality, Chain formation and structural Case

Let's consider in more detail the Dutch-Frisian contrast mentioned earlier. Both Dutch and Frisian require reflexivity to be licensed. This is illustrated in (27).

(27)
	Reflexivity
	a.	Jan haat *zich/zichzelf		(Dutch)
	b.	Jan hatet *him/himsels		(Frisian)
		'John hates himself'
Frisian himsels is indeed an anaphor, like English himself, since it must be locally bound. It shows the same exemption effects. The important difference between Frisian and other Germanic languages resides in the domain of local binding where reflexivity conditions are independently satisfied. Here, Frisian allows locally bound 3rd person pronouns, as illustrated in (28): (28)
	Chain formation
	a.	Jan voelde [zich/*hem wegglijden] / Jan waste zich/*hem	(Du)
	b.	Jan fielde [him fuortglieden] / Jan waske him		(Fr)
	c.	John λx (x felt [x slip away])
J. Hoekstra (1994) shows that there is an independent factor interfering: there are two Case licensing strategies available for object pronominals in Frisian. One involves structural Case, the other inherent Case. These strategies are formally distinguished in the feminine singular and the plural common gender. In these cells of the paradigm we find a form se, with a distribution that is restricted to the DO position (it does not occur in PPs, nor in free datives). The se-form (note that it is a pronoun) cannot be locally bound. The form har(ren) occurs in all non-nominative positions, and can be locally bound. With Hoekstra I will assume that this difference is also realized in the 3rd person masculine paradigm, although it is not reflected in a morphological distinction there.
Reinterpreting Hoekstra's distinction in terms of the current approach, it suffices to say that only a structurally Case-marked pronoun (se, him) bears an uninterpretable Tense feature; inherent Case (on har, him) is just checked by the verb itself. Given P&T's approach, only the structurally Case-marked pronoun is subject to probing by the T-system and chain formation. The inherently Case-marked pronoun is not. If so, in a numeration with the inherently Case-marked pronoun the initial conditions for chain formation are not met, hence the PRD cannot come into play and no cancellation results. Therefore, semantic binding of the pronoun is not blocked, hence allowed. Note that in this approach there is no direct competition between a derivation with structural and one with inherent Case. The sole possibility of licensing a pronoun in DO position with non-structural Case is enough to allow for local binding (unlike Safir's 2004 approach, which is based on direct competition).

Interestingly, this allows for mixed paradigms. Suppose we had a language in which only a 3rd person feminine singular pronoun can be licensed in DO position by non-structural Case. That is, the cell for inherent Case in the 3rd person masculine singular is empty (as is the cell for inherent Case in the 3rd person plural). In that case we would expect local binding of a 3rd person feminine singular pronoun to be possible, but not of a 3rd person masculine singular one. Such a language in fact exists. It is an increasingly popular variant of slightly substandard Dutch, in which one easily hears zij wast haar 'she washes her', but not *hij wast hem 'he washes him', reflecting the fact that haar has a wider distribution than hem, and is also used as a genitive (see Baauw 2000).

Given this approach, one would not expect this pattern to be limited to Frisian. As we already saw, Old English also allows locally bound pronominals. Van Gelderen (2000) shows in detail that Old English is characterized by a lack of structural Case for the object. One of her important observations is that Old English lacked personal passives. Hence here too, lack of structural Case allows local binding. But in fact we don't have to move that far into the past for similar facts. Conroy (2007) discusses English personal datives (PD; see Webelhuth and Dannenberg 2006 for initial discussion). Personal datives as in (29) are widespread in Appalachian English and Southern American English and not limited to 1st and 2nd person. Here too we see a 3rd person pronoun being locally bound.

(29)
She loves her some beans
The question, then, is what makes this possible. In order to answer this, Conroy investigates the differences between regular reflexives and the PD, and notes that PDs do not even allow a reflexive:

(30)
*She loves herself some beans
Furthermore, PDs cannot be passivized, and they don't imply transfer of possession. She concludes that the PD is not a thematic argument of the predicate; hence no reflexive predicate is formed and no licensing is needed. Importantly, there is also no structural Case relation between the PD and the verb. Consequently, the parameter involved in the variation is independent of binding theory. So we arrive at the following descriptive generalization:

(31)
Local binding of a 3rd person pronominal is possible if it does not carry structural Case (uninterpretable Tense)
This generalization follows since in this case no cancelled chain is formed; hence semantic binding is available. Let's now review the solution to our last puzzle.
7. To mark or not to mark: Local binding of 1st and 2nd person pronominals in English

The following pattern has occasionally been mentioned as a problem for the reflexivity approach in Reinhart and Reuland (1993) (see, for instance, Heinat 2005). As we saw in (20), in West Germanic, Scandinavian, and Romance languages 1st and 2nd person pronominals can be locally bound, but not in English:

(32)
*I washed me, *you washed you, *we washed us
Section 4.4 shows why local binding of 1st and 2nd person pronouns is allowed in principle, which then raises the question of why the English equivalents of (20) are ruled out. As we will see now, we already have all the ingredients for an answer. As shown in section 4.3, English is a lexicon language in the sense of R&S. English also has a marginal Case system. Bundling applies in the lexicon to the transitive entry of a verb like wash, turning it into an unergative verb with a composite theta-role for the subject, and leaving no residual Case to check for an object. The situation can be summarized as follows. First consider the transitive entry wash. If the subject simply binds the object without any further licensing, the configuration in (33b) results, which violates IDI. Instead we must have (33c):

(33)
	a.	*I wash me
	b.	I λx (x washes x)
	c.	I washed myself
Consider alternatively the reduced entry of wash with bundling of agent and theme roles. The reduced entry in English has no Case to be checked. Consequently, we get (34a), not (34b), since the latter has an object with unlicensed Case. (34)
	a.	I washed
	b.	*I washed me
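The two lexical entries for wash that this reasoning relies on can be summarized schematically (my own summary of the preceding discussion, in the notation of section 4.3):

	transitive entry:	wash (θ1, θ2), with structural ACC	-> a reflexive use must be licensed: I washed myself (= 33c)
	bundled entry:		wash [θ1, θ2] (x), no ACC to check	-> I washed (= 34a); *I washed me is out, since me has no Case (= 34b)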
One might wonder how to understand the behavior of verbs such as behave, perjure, etc. as in (35), which allow the option of an anaphor, despite being semantically one-place verbs. (35)
	a.	We behaved
	b.	We behaved ourselves
	c.	*We behaved us
Clearly, something special must be said about these verbs, but is it about binding or is it about argument structure? That is, where does the idiosyncrasy reside?

7.1 Structure and Case

The simplest account runs as follows. Semantically, (35a) and (35b) are equivalent; both are one-place verbs. We know independently that in English verbs can be inserted in a simplex shell, with no Case, and in a complex shell, with structural Case. Case-checking properties of a transitive verb are represented as properties of a specific segment of the lexical verb, such as "little" v (v*) (Chomsky 1995, 2008). A simplex shell arises in a numeration with no v*; a complex shell obtains in a numeration with v*. A minimal stipulation is now that behave gives rise to a well-formed derivation both in a numeration without v* and in a numeration with v*. In the former derivation there is no Case to check, hence (35c) is ruled out. In the latter derivation the configuration in (36) arises:
(36)
We v*_behave us V_behave (us)
Here, us raises to check its Case with v*. If us raises, the configuration we v* us forms a predicate that is formally reflexive. IDI entails a mismatch in the instructions for the interpretation of v*. Thus, (36) leads to a violation of IDI, ruling out (35c) under that derivation. Hence the predicate formed of v* must be reflexive-marked, and ourselves instead of us is required. A similar reasoning applies to ECM, where locally bound pronouns are ruled out as well, as in (37):
I expected myself/*me to like Lucie
As Postal (1974), and Johnson (1991) have shown, the ECM subject moves out of its thematic position into the matrix clause, where it checks it Case (see Runner 2005 for an overview): (38)
She made Jerry out [ t to be famous] (Kayne 1985)
If so, the relevant structure of (37) is (39): (39)
I1 v*expected me1 [H˚expected [(me1) to like Lucie]]
Consequently, the v* heads a reflexive predicate that requires licensing along the lines discussed. Again, no special statement about binding in English is necessary. What remains is the question of what is special about English ECM. In fact this difference reduces to an independent structural difference between English and other Germanic (and Romance) languages, that is brought out by VP-deletion, as in (40): (40)
a. b. c. d.
Alice saw the bottle and the cat did/could/…[VP -] too *Alice zag de fles en de kat deed/kon/...[VP -] ook Alice zag de fles en de kat deed/kon dat ook. Alice zag de fles en de kat [XP-] ook.
As illustrated in (40), only English allows eliding a VP complement in the complement position of an auxiliary. In Dutch, etc. the expression in the complement of the auxiliary cannot be null; eliding a larger constituent is possible, however, as illustrated in (40d). The simplest assumption to account for this difference is that in English there is a divide between v* and V. The v*, perhaps together with T, is realized as an independent syntactic constituent, that takes a VP as its complement. In Dutch, German, etc., then, the V*-V system is fusional, and, assuming the subject de kat is in spec-TP, XP in (40d) is minimally v*P. Thus, in English, the presence of structural accusative Case entails the presence of a predicate-type category v* to which conditions on reflexive licensing apply and that is not structurally coextensive with the predicate formed of V. Hence, in line with what we already saw in the "free dative"/personal dative (PD) constructions of Appalachian English, there is nothing wrong per se with local binding of 1st and 2nd person pronominals (as one might already have seen in expressions like I bought me a book, which are accepted quite generally). This analysis entails that English does not require a SELF-anaphor wherever an argument can be licensed (i.e. can get Case) in "exceptionally" Case marked complements in a way that does not involve moving into the matrix clause. That this consequence is borne out is shown by ACC-ing constructions, as will be discussed in the next section.
7.2 ACC-ing constructions

In addition to canonical ECM, English has another clausal construction where the Case of the subject is determined by the verb of the matrix clause, the ACC-ing construction, illustrated in (41):9

(41)
	a.	Mary hated him coming to the party.
	b.	Mary hated no one coming to the party.
As shown in Reuland (1983), the ACC of the subject is not licensed directly by the (functional projections of the) matrix verb, but indirectly via the -ing head. Characteristic of the ACC-ing construction are the absence of a wide scope reading for the subject quantifier, and the absence of raising under passive. That is, (41b) does not have the interpretation (42a), and (42b) is ill-formed.

(42)
a. for no x: Mary hated [x coming to the party]
b. *No one was hated [t coming to the party]
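For concreteness, the scope contrast at issue in (41b)/(42a) can be rendered schematically as follows (an expository sketch added here; the particular logical notation is not part of the original discussion):

\[
\begin{aligned}
&\text{wide scope (unavailable for (41b)):} && \neg\exists x\,\big[\textit{hated}(\textit{Mary},\,[x \text{ coming to the party}])\big]\\
&\text{narrow scope (available):} && \textit{hated}(\textit{Mary},\,[\neg\exists x\,[x \text{ coming to the party}]])
\end{aligned}
\]

On the available construal the negative quantifier is interpreted within the gerund; what (42a) excludes is the construal on which it scopes over the matrix predicate.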
In brief, there is no movement out of the ACC position. This should also apply to SELF-movement. If so, locally bound pronominals are expected to be possible, and in fact they are, quite contrary to what the canonical BT would lead one to expect. A Google search yields examples such as the following: (43)
a. I can see me having to pay his therapy bill for him when he’s older
b. Do you recall you being invited and not wanting to go?
c. He remembers him having to grow up the same way. He now understands what rich people are like.
d. He does not recall him having to use a compass to get to the LP/OP site
e. We see us having to pick up huge bills
f. They charged us for them having to clean the vent…
Consider also want-type complements, headed by a null preposition in C, which assigns or mediates Case (Kayne 1981). Again, as a Google search shows, examples with the structure of (44) are not rare. (44)
I want me to VP
Sentences with this pattern are nevertheless less frequent than corresponding sentences with PRO, and one may wonder why. This is best understood as an effect of the "Avoid pronoun principle" discussed in Chomsky (1981). From the current perspective, this principle has more the character of a descriptive generalization than of a core principle, but irrespective of how it is ultimately to be explained, it is independent of binding theory proper. We may conclude, then, that nothing special needs to be said to account for the binding properties of anaphors and pronominals in English.
9 Note that ing-clauses also occur as participial modifiers. In certain environments, such as perception verb complements, this leads to an ambiguity where V DP-ACC V-ing can either be construed as a canonical small clause (which allows passive and wide scope) or as a true ACC-ing. In the context of (41) the two construals can be clearly distinguished; see Reuland (1983).
9. Summary
The approach to binding enforced by the inclusiveness condition is fruitful and simple. For SELF-movement no special locality conditions are needed, and anti-locality effects follow from general properties of computations and conditions on chain formation, coupled with economy. Specific binding patterns in natural language follow from the interaction between general properties of computations, general processes of chain formation on which binding has a free ride, and language-specific parameters, including operations on argument structure and Case licensing.

10. References
Baauw, Sergio. 2000. Grammatical features and the acquisition of reference. A comparative study of Dutch and Spanish. Utrecht: LOT Dissertation Series 39.
Boeckx, Cedric, Norbert Hornstein and Jairo Nunes. 2007. Overt Copies in Reflexive and Control Structures: A Movement Analysis. In University of Maryland Working Papers in Linguistics 15, ed. A. Conroy, C. Jing, C. Nakao and E. Takahashi, 1-46. College Park, MD: UMWPiL.
Chomsky, Noam. 1981. Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, Noam. 1995. The Minimalist Program. Cambridge, Mass.: MIT Press.
Chomsky, Noam. 2001. Derivation by Phase. In Ken Hale: A Life in Language, ed. Michael Kenstowicz. Cambridge, Mass.: MIT Press.
Chomsky, Noam. 2005. Three Factors in Language Design. Linguistic Inquiry 36.1, 1-22.
Chomsky, Noam. 2008. On Phases. In Foundational Issues in Linguistic Theory: Essays in Honor of Jean-Roger Vergnaud, ed. Robert Freidin, Carlos Otero and Maria-Luisa Zubizarreta, 133-166. Cambridge, Mass.: MIT Press.
Cole, Peter, Gabriella Hermon and Yassir Tjung. 2008. A Binding Theory Exempt Anaphor in Javanese. In Reciprocals and Reflexives: Theoretical and Typological Explorations, ed. Ekkehard König and Volker Gast. Berlin: Mouton de Gruyter.
Conroy, Stacey. 2007. I'm going to do me a talk on Personal Datives …. but not really. Syntax Lunch Talk, February 21, 2007.
Everaert, Martin. 1986. The Syntax of Reflexivization. Dordrecht: Foris.
Farmer, Ann, and Robert Harnish. 1987. Communicative reference with pronouns. In The pragmatic perspective, ed. J. Verschueren and M. Berucelli, 547-565. Amsterdam: Benjamins.
Frampton, John. 2004. Copies, Traces, Occurrences, and all that: Evidence from Bulgarian multiple wh-phenomena. Ms., Northeastern University, second draft, September 2004.
Gelderen, Elly van. 2000. A History of English Reflexive Pronouns: Person, Self, and Interpretability. Amsterdam: Benjamins.
Hauser, Marc D., Noam Chomsky and W. Tecumseh Fitch. 2002. The Faculty of Language: What Is It, Who Has It, and How Did It Evolve? Science 298, 1569-1579.
Heinat, Fredrick. 2005. Probes, pronouns, and binding in the Minimalist Program. PhD dissertation, Lund University.
Hellan, Lars. 1988. Anaphora in Norwegian and the Theory of Grammar. Dordrecht: Foris.
Heim, Irene. 1982. The Semantics of Definite and Indefinite Noun Phrases. Ph.D. dissertation, University of Massachusetts, Amherst. Published in 1989 by Garland, New York.
Heim, Irene, and Angelika Kratzer. 1998. Semantics in Generative Grammar. Malden: Blackwell.
Heine, Bernd and Hiroyuki Miyashita. 2008. The intersection between reflexives and reciprocals: a grammaticalization perspective. In Reciprocals and Reflexives: Theoretical and Typological Explorations, ed. Ekkehard König and Volker Gast, 169-225. Berlin: Mouton de Gruyter.
Hicks, Glyn. 2009. The Derivation of Anaphoric Relations. Amsterdam: Benjamins.
Hoekstra, Jarich. 1994. Pronouns and Case: On the distribution of Frisian harren and se 'them'. Leuvense Bijdragen 83, 47-65.
Hornstein, Norbert. 2001. Move! A Minimalist Theory of Construal. Oxford: Blackwell.
Jayaseelan, K.A. 1997. Anaphors as Pronouns. Studia Linguistica 51.2, 186-234.
Johnson, Kyle. 1991. Object positions. Natural Language and Linguistic Theory 9, 577-636.
Kayne, Richard. 1981. On Certain Differences between French and English. Linguistic Inquiry 12, 349-371.
Kayne, Richard. 1985. Principles of Particle Constructions. In Grammatical Representation, ed. Jacqueline Guéron, Hans-Georg Obenauer and Jean-Yves Pollock, 111-140. Dordrecht: Foris.
Kayne, Richard. 2002. Pronouns and their antecedents. In Derivation and Explanation in the Minimalist Program, ed. Samuel Epstein and Daniel Seely, 133-167. Malden: Blackwell.
Koornneef, Arnout. Eye-catching Anaphora. Doctoral dissertation, LOT International Series. Utrecht: UiL OTS.
Koster, Jan. 1985. Reflexives in Dutch. In Grammatical Representation, ed. Jacqueline Guéron, Hans-Georg Obenauer and Jean-Yves Pollock, 141-168. Dordrecht: Foris.
Marelj, Marijana. 2004. Middles and Argument Structure across Languages. Doctoral dissertation, LOT International Series. Utrecht: UiL OTS.
May, Robert. 1977. The Grammar of Quantification. Doctoral dissertation, MIT, Cambridge, Mass.
Menuzzi, Sergio. 1999. Binding Theory and Pronominal Anaphora in Brazilian Portuguese. HIL dissertation, Leyden. LOT International Series 30. The Hague: Holland Academic Graphics.
Moyse-Faurie, Claire. 2008. Constructions expressing middle, reflexive and reciprocal situations in some Oceanic languages. In Reciprocals and Reflexives: Theoretical and Typological Explorations, ed. Ekkehard König and Volker Gast, 105-168. Berlin: Mouton de Gruyter.
Nunes, Jairo. 2004. Linearization of Chains and Sideward Movement. Cambridge, Mass.: MIT Press.
Pesetsky, David and Esther Torrego. 2004. The Syntax of Valuation and the Interpretability of Features. Ms., MIT and UMass/Boston.
Postal, Paul M. 1974. On Raising. Cambridge, Mass.: MIT Press.
Reinhart, Tanya. 1983. Anaphora and Semantic Interpretation. London: Croom Helm.
Reinhart, Tanya. 2000. Strategies of Anaphora Resolution. In Interface Strategies, ed. Hans Bennis, Martin Everaert and Eric Reuland, 295-325. Amsterdam: Royal Academy of Sciences.
Reinhart, Tanya. 2002. The Theta System - an Overview. Theoretical Linguistics 28.3.
Reinhart, Tanya. 2006. Interface strategies: Optimal and costly computations. Cambridge, Mass.: MIT Press.
Reinhart, Tanya, and Eric Reuland. 1991. Anaphors and Logophors: An Argument Structure Perspective. In Long Distance Anaphora, ed. Jan Koster and Eric Reuland, 283-321. Cambridge: Cambridge University Press.
Reinhart, Tanya and Eric Reuland. 1993. Reflexivity. Linguistic Inquiry 24.4, 657-720.
Reinhart, Tanya and Tal Siloni. 2005. Thematic Arity Operations and Parametric Variations. Linguistic Inquiry.
Reuland, Eric. 1983. Governing -ing. Linguistic Inquiry 14.1, 101-136.
Reuland, Eric. 2001. Primitives of Binding. Linguistic Inquiry 32.2, 439-492.
Reuland, Eric. 2005a. Binding Conditions: How are they Derived? In Proceedings of the HPSG05 Conference, ed. Stefan Müller. CSLI Publications. http://cslipublications.stanford.edu/
Reuland, Eric. 2005b. Agreeing to Bind. In Organizing Grammar: Linguistic Studies in Honor of Henk van Riemsdijk, ed. Hans Broekhuis, Norbert Corver, Riny Huybregts, Ursula Kleinhenz and Jan Koster, 505-513. Berlin: Walter de Gruyter.
Reuland, Eric. 2008. Anaphoric dependencies: How are they encoded? Towards a derivation-based typology. In Reciprocals and Reflexives: Theoretical and Typological Explorations, ed. Ekkehard König and Volker Gast, 502-559. Berlin: Mouton de Gruyter.
Reuland, Eric. 2010. Minimal versus not so minimal pronouns. In The Linguistics Enterprise, ed. Martin Everaert, Tom Lentz, Hannah de Mulder, Oystein Nilsen and Arjen Zondervan. Amsterdam: John Benjamins.
Reuland, Eric. 2011/in press. Anaphora and Language Design. Cambridge, Mass.: MIT Press.
Reuland, Eric. To appear. Syntax and interpretation systems: How is their labour divided? In Handbook of Minimalism, ed. Cedric Boeckx.
Reuland, Eric, and Yoad Winter. 2009. Binding without Identity: Towards a Unified Semantics for Bound and Exempt Anaphors. In Anaphora Processing and Applications, ed. Sobha Devi, Antonio Branco and Ruslan Mitkov, 69-80. Lecture Notes in Artificial Intelligence. Berlin: Springer.
Runner, Jeffrey T. 2005. The Accusative Plus Infinitive Construction in English. In The Blackwell Companion to Syntax, ed. Martin Everaert and Henk van Riemsdijk. Oxford: Blackwell Publishing.
Safir, Kenneth. 2004. The syntax of anaphora. Oxford: Oxford University Press.
Schladt, Mathias. 2000. The typology and grammaticalization of reflexives. In Reflexives: Forms and Functions, ed. Zygmunt Frajzyngier and Traci Curl. Amsterdam: Benjamins.
Thráinsson, Höskuldur. 1991. Long Distance Reflexives and the Typology of NPs. In Long Distance Anaphora, ed. Jan Koster and Eric Reuland, 49-75. Cambridge: Cambridge University Press.
Vasić, Nada, Sergey Avrutin and Esther Ruigendijk. 2006. Interpretation of pronouns in VP ellipsis constructions in Dutch Broca’s and Wernicke’s aphasia. Brain and Language 96.2, 191-206.
Webelhuth, Gert and Clare Dannenberg. 2006. Southern American personal datives: the theoretical significance of dialectal variation. American Speech 81.1, 31-55.
Winter, Yoad, and Eric Reuland. 2008. Binding without Identity: Reference by Proxy and the Functional Semantics of Pronouns. Ms., UiL OTS.
Zwart, Jan-Wouter. 2002. Issues Relating to a Derivational Theory of Binding. In Derivation and Explanation in the Minimalist Program, ed. Samuel Epstein and Daniel Seely, 269-294. Malden: Blackwell.