Scott DeLancey

LSA Summer Institute, Santa Barbara, 2001

Lecture 1

 

                       On Functionalism

 

Our subject matter is "functional syntax".  This is from the outset something of a misnomer, since one of the hallmarks of functionalism is its refusal to recognize strict theoretical or methodological boundaries between syntax and the explanatory realms of semantics, pragmatics, and discourse, or for that matter among synchronic, diachronic, phylo- and ontogenetic analysis and explanation.  That is, there is no such thing as "functionalist syntax" in the sense that there is "generative syntax", since a generativist assumes ex hypothesi that there is a distinct syntactic component in Universal Grammar for "syntax" to be the study of.

     Still, we all recognize that one of the hallmarks of human language is the ability to combine symbolically-meaningful signs into more complex structures.  Many clever mammals, and apparently a few birds, are able to learn a substantial number of words, and even use them--but, with the marginal exceptions of chimpanzees in the "ape language" experiments, only one at a time.  This uniquely human behavior is what we call morphosyntax, and whether or not it forms a unitary and legitimately discrete theoretical domain, it does form a roughly definable field of inquiry.

     Morphosyntax is indeed a wonderful, and wonderfully complex, phenomenon.  But the true mystery, and the true locus of explanation for most of the fundamental facts of syntax, is in what it is expressing.  We lightly debate whether or not language is "primarily" for communication, without touching on exactly what linguistic "communication" entails.  Human language is not simply a device for presenting and pointing to interesting objects and events in the world.  It is a set of tools for communicating our experience, and its structure is fundamentally informed by the structure of our experience and our cultural models of experience.  Languages, for example, tend to afford distinct treatment of some kind to expressions of individual internal experience ("experiencer subject" predicates of emotion and cognition, internal states such as hunger, etc.), which are treated differently grammatically from predicates describing events typically known through perceptual data from the outside world.

     The purpose of this course will be to demonstrate functionalist explanations of some of the phenomena which constitute the subject matter of theories of core syntax.  I will present a sequence of interwoven accounts of aspects of clause structure from the inside out, and some illustrations of the issues in clause combining phenomena.  Grammaticalization will be a central theme, and the outlines of grammaticalization theory will be presented in Lecture 3.  With that as a basis, I will then present an explanatory account of what we know about language, from the ground up.  Obviously this is too large a task for the available time, and we will have to limit our scope in both breadth and depth--there are limits to how far up from the ground we can get, and to how many grammatical phenomena we can deal with.  But I hope to give you a sense of how much of linguistic structure can be explained without recourse to untestable hypotheses about neural structure.

 

 

1.0  Functionalism on the Linguistic Map

 

The term "functional" has been attached to a variety of different models, schools, movements, and methodologies, in and outside of linguistics.  I am using it to refer specifically to the movement which grew out of the work of a group of linguists mostly centered in California in the 1970's, including Talmy Givón, Charles Li, Sandra Thompson, Wallace Chafe, Paul Hopper, and others.  This grouping has also been referred to as "functional/typological linguistics" or, informally, "West Coast" (Noonan 1999) or "California" functionalism, though these last terms are by now anachronistic, as there are prominent researchers closely identified with the functional movement around the world.

     Even within this narrowed application of the term, there is certainly no monolithic "functional theory" shared by all those who would identify themselves as part of or allied with the functional movement.  Givón (1984), Hopper and Thompson (1984), and Langacker (1987), for example, present very different (though not entirely incompatible) accounts of lexical categories, and the "emergent grammar" of Hopper (1987, 1991) gives a very different picture of syntax from, say, Givón 1995.  What all functionalists have in common is a rejection of the notion of formalism as explanation.  The basic difference between functionalist and formalist linguistic frameworks is in where explanations are lodged, and what counts as an explanation.  Formal linguistics generates explanations out of structure--so that a structural category or relation, such as command or Subjacency (see e.g. Newmeyer 1999:476-7), can legitimately count as an explanation for certain facts about various syntactic structures and constructions.  Most contemporary formal theories, certainly Generative Grammar in all its manifestations, provide ontological grounding for these explanations in a hypothesized, but unexplored and unexplained, biologically-based universal language faculty.

     Functionalists, in contrast, find explanations in function, and in recurrent diachronic processes which are for the most part function-driven.  That is, they see language as a tool, or, better, a set of tools, whose forms are adapted to their functions, and thus can be explained only in terms of those functions.  Formal principles can be no more than generalizations over data, so that most Generative "explanation" seems to functionalists to proceed on the dormitive principle.[1]  Functionalism in this sense overlaps tremendously with--and in a real sense, subsumes--allied schools such as Cognitive Grammar and the "Constructivist" school in Europe (e.g. Schulze 1998).

     Modern functionalism is, in important ways, a return to the conception of the field of those linguists who founded the linguistic approach to synchronic, as well as diachronic, phenomena in the late 19th century (see Whitney 1897, von der Gabelentz 1891, Paul 1886, inter alia).  These scholars understood that linguistic structure must be explained in terms of functional, cognitive, "psychological" imperatives:

 

     Language, then, signifies rather certain instrumentalities whereby men consciously and with intention represent their thought, to the end, chiefly, of making it known to other men; it is expression for the sake of communication.  (Whitney 1897:1)

 

They also understood that any language is a product of history, and that synchronic structure is significantly informed by diachronic forces.  They looked to functional motivation for the basis of linguistic structure, and to motivation and recurrent patterns of diachronic change for explanations of cross-linguistic similarities of structure.  In this respect modern functionalism is a return to our roots after a nearly century-long structuralist (or, in Huck and Goldsmith's (1995) useful term, "distributionalist") interregnum.

     The roots of contemporary mainstream linguistics, in contrast, go back only to the Structuralists who, in keeping with the intellectual tenor of an era noteworthy for the ascendancy of behaviorism in psychology and of Logical Positivism in philosophy, banished all notion of explanation from the field, letting the structure simply be.  (See, for example, the resolute empiricism of Hockett 1966).  This left them without any avenue for explaining cross-linguistic similarities, but this was an endeavor which most American Structuralists had little interest in.  Note, for example, how Schmidt's (1926) and Tesnière's (1959) documentation of extensive cross-linguistic correlations in word order patterns aroused virtually no interest in American linguistics, whereas within a decade of Greenberg's (1963) rediscovery of the phenomenon it had launched the small but vigorous typological movement which is the direct intellectual and sociological foundation of contemporary functionalism.[2]

     The "Generative Revolution" which began with Syntactic Structures is generally presented as a reaction to this Structuralist agnosticism, a re-introduction of the notion of explanation in the science of language.  Unfortunately, the Generativists inherited from their Structuralist forbears a deep distrust of "external" explanation.  They resolved the problem by positing language-internal "explanations" for linguistic consistency.  And to all appearances many contemporary theoreticians continue to believe that they can have their cake and eat it too, to have an autonomous theory of linguistics which explains structure without itself needing explanation.  Functionalism in this respect is the true revolution--or, better, counter-revolution, as it constitutes a return to a concept of explanation which has been ignored since the Bloomfieldian Ascendancy.

 

 

2.0 Functionalist Metatheory

 

Defining a body of opinion and research like Functionalism requires both a theoretical and a sociological dimension.  For Functional linguistics, like Generative linguistics, or Minimalist syntax, or what have you, refers both to a set of intellectual positions which define the school, and to a group of scholars who adhere (to whatever degree) to it.  Although they represent two different, though overlapping, social groups, there is no sharp break in theory or practice between the Functional and Cognitive movements in contemporary linguistics.  The difference between the two schools, like, say, the difference between Autosegmental and Metrical Phonology, has to do with the particular problems which their members find to be of the most immediate interest, rather than any fundamental difference in their approaches to explanation in linguistic theory.  The approaches of, on the one hand, Cognitivists like Langacker, Lakoff, Fauconnier, or Goldberg, and on the other functionalists like Givón, Hopper, Heine, or Bybee, clearly complement one another.

     I do not intend to present this course in the format of "the Functionalist alternative to mainstream syntax".  That is, I won't spend too much time explaining at every juncture how and why our analysis is better than someone else's.  In the first place, sometimes it's not--it may even be effectively the same analysis, couched in different vocabulary.  (Usually, though, the terminological differences are the key to fundamental differences in the theoretical framework within which the analysis is placed).  More to the point, I can't consistently address formalist alternatives to what I'm proposing, because I can't keep track of what they are.  It's hard to hit a moving target ...  But most of all, because Functionalism is not simply a reaction to someone else's theory--it is a framework for thinking about and explaining linguistic structure and behavior, and, like any coherent framework, makes sense best when it is presented in its own terms.

 

 

2.1  Structural and Functional definitions

 

Linguistic categories can be defined in different ways.  We will return to this in more detail in the next lecture; here I simply want to introduce one of the basic aspects of Functional analysis.  Consider the concept Noun.  Start with the traditional notional definition: '(word whose reference is) a person, place, or thing'.  The basic problem with this is that it is not operationalizable.  It cannot reliably tell us whether a given concept will be a noun or a verb, since many concepts can occur as both:  the classic example is fire and burn; similar problems arise with English zero-derivation pairs like the noun fight and the verb fight.  Moreover, a notional definition can't even explain all nouns post hoc.  English honesty, for example, is clearly a noun.  It does not refer to a person or place, so it must qualify as a noun by referring to a thing.  But by what criterion is HONESTY a thing?  We are left chasing a circle: it must be a thing, because it is labelled by a noun, honesty--and honesty is a noun because it labels a thing, HONESTY.  (We will have occasion to deal further with this sort of circularity, which is more apparent than real.  The perceived circularity inheres in the folk-theoretic conception of language as an autonomous system into which meanings can be put.  On this view, either "Nounness" or "thingness" must be basic, and the other then must be defined in terms of it.  A better conception of what we are looking at here starts from the premise that language is simply the overt expression of cognitive structure.  Then THING is, indeed, a basic conceptual category, but Noun is not defined in terms of THING, it is simply the linguistic manifestation of THING).

     So Structuralists insist on rigorously structural definitions.  A noun is a word which fits into noun slots, pure and simple.  This is operationalizable--to decide whether a word is a noun or not, try and make it the subject of a clause, and see what happens.  But this is unsatisfactory in three crucial respects.  First, as the Structuralists were well aware, it makes it impossible to equate word classes across languages.  Second, and more critically, it offers no explanation for why there should be such a thing as a "noun slot", or why any particular word should fit into that slot rather than some other.  And third, however much it might outrage the positivistic assumptions of the likes of Bloch, there is no evading the clear intuition that we all--linguists and non-linguists alike--have that there is some notional basis at least to major categories like noun, verb, adjective, and adposition.
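
     To see concretely what "operationalizable" means here, the following sketch may help.  It is purely illustrative--the diagnostic frames are invented, and the acceptable() function merely stands in for a native speaker's acceptability judgment, which is the real instrument of the Structuralist test:

     # A toy distributional test for nounhood: a word counts as a noun iff
     # it can fill a "noun slot", here the subject position of a diagnostic
     # frame.  Frames are invented; `acceptable` stands in for a native
     # speaker's acceptability judgment.
     FRAMES = [
         "The {} surprised everyone.",
         "{} is what we talked about.",
     ]

     def is_noun(word, acceptable):
         """Structural definition: nounhood = fitting at least one noun slot."""
         return any(acceptable(frame.format(word)) for frame in FRAMES)

     # An (obviously oversimplified) stand-in for speaker judgments:
     judgments = {"honesty is what we talked about.": True}

     def acceptable(sentence):
         return judgments.get(sentence, False)

     print(is_noun("honesty", acceptable))  # True: fits a noun slot
     print(is_noun("quickly", acceptable))  # False: fits none

     Notice that the test classifies honesty correctly while saying nothing about why it should pass; that silence is precisely the substance of the second and third objections above.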

 

 

2.2  Formal and functional explanation

 

Consider the fact that in a wide range of languages, across various language types, we find a construction in which a constituent which would ordinarily occur elsewhere in the sentence occurs instead in sentence-initial position, and that in language after language this construction is used when the constituent is a contrastive or resumptive topic, as in these examples from Thai and English (both basically SVO languages):

 

     1)   khon   naân maj ruucak

          person that NEG recognize

          'I don't know that guy.'

 

     2)   Costello I'd hire in a minute.

 

One kind of account "explains" this fact by saying that there is a syntactic position in underlying structure at the beginning of a sentence.  If a constituent is to be moved, it can only be moved to a syntactic position, so there it goes.  This is a formal explanation: it follows the notion of explanation according to which a phenomenon is explained if it can be given a place in a formal theory of language, i.e. if the theory "can explain it on the basis of some empirical assumptions about the form of language" (Chomsky 1965:26)[3].  But this is, once again, explanation by the dormitive principle:  essentially, constituents get moved to initial position because, when they get moved, that's where they end up.

     To a functionalist, such an account cannot, in principle, be an explanation.  It is simply a statement of the data.  The choice of vocabulary in which such a statement is made cannot constitute an explanation.  Moreover, it fails to explain the apparent correlation between left-dislocation on the one hand and topicality and contrastiveness on the other.  We do not, for example, find languages where contrastive constituents are moved to sentence-second position, though this is also a syntactically-defined position (cf. Newmeyer 1993:102-3).

     A legitimate explanation for the typological facts here must offer an account which provides a principled reason for the association of topic function with initial position--otherwise it is not an explanation, merely a description.  And at least the basis of such an explanation is not far to seek.  It is a well-known and long established fact in psychology that the first in a series--any kind of series, in any modality--has a perceptually privileged position (Gernsbacher and Hargreaves 1988, 1992).  This fact by itself is obviously not an explanation for any syntactic facts, but combined with an adequate understanding of topicality and of sentence construction and interpretation (see e.g. Gernsbacher 1990; we will return to this question in later lectures) it offers the possibility of a truly explanatory account.

     For another example, consider the so-called "Unaccusative Hypothesis".  In a significant number of languages the single arguments of monovalent verbs fall into two classes in terms of some morphosyntactic behavior by which some of them act like transitive subjects, others like transitive objects (see Mithun 1991; we will discuss some of these data more fully in a later lecture).  In a fair number of languages, indeed, they are explicitly coded like transitive subjects or objects by surface case marking or indexation in the verb; a well-known example is Lakhota:

 

     3)   wa-kte    'I kill him'

          wa-ñiwa~  'I swim'

 

     4)   ma-kte    'he killed me'

          ma-t'a    'I die'

 

These are the data, this is what must be explained.  So what explanation does the Unaccusative "Hypothesis" offer?  Why, that some arguments of intransitives are subjects, and some are "underlyingly" objects:

 

     there are two classes of intransitive verbs, the unaccusative verbs and the unergative verbs, each associated with a different underlying syntactic configuration ... an unergative verb takes a D-Structure subject and no object, whereas an unaccusative verb takes a D-structure object ... and no subject ... Alternatively, in argument structure terms, an unergative verb has an external argument but no direct internal argument, whereas an unaccusative verb has a direct internal argument but no external argument.  (Levin and Rappaport Hovav 1995:2-3, emphasis original)

 

Clearly this explanation is entirely theory-bound.  In order for it to even make sense, we have to first believe that there simply are subjects and objects.  Then the claim that "there are two classes of intransitive verbs" is not a hypothesis at all, but an empirical claim--it may or may not be true (or have any morphosyntactic repercussions) in English or Chinese, but in Lakhota, Pomo, Guarani, Acehnese, Lhasa Tibetan, Italian, Dutch, etc., it is an inescapable empirical fact that there are indeed two classes of intransitive verb.  Now, "each associated with a different underlying syntactic configuration" is already a theory-bound formulation; it is meaningful only in terms of some interpretation of the phrase "underlying syntactic configuration".  L and R-H give us some possible interpretations of this:  taking a "D-structure subject and no object" vs. "a D-structure object and no subject", or an "external" but no "internal" vs. "internal" but no "external" argument.

     But these formulations are meaningless without a framework within which notions like "D-structure subject" and "external argument" are defined.  From a functionalist perspective, notions like these cannot have any explanatory value.  We know that in Lakhota, or Lhasa Tibetan, the subject (I am deliberately not saying "surface subject"--that's the only "subject" there is) of 'jump' or 'swim' is marked like a transitive subject, and that of 'die' like a transitive object.  A functionalist approach requires us to assume as a first hypothesis that this happens for a functional reason--that in some cognitive or communicative respect the subject of 'jump' is more like the subject than like the object of 'kill', and that of 'die' more like the object than like the subject.  All that formulations like L and R-H's tell us is that they are treated syntactically alike, and that is nothing more than we know from the start.  An explanatory account must explain why they should be alike--why is a jumper more like a killer than a killee, why is a dier more like a killee than a killer?  Put otherwise, we need some story about how you get to be an unaccusative or an unergative verb in this world.

     Thus stated, part of the answer is obvious.  How is the subject of 'die' like the object of 'kill'?  Well, because they both die, obviously.  And, similarly, how is a jumper like a killer?  Well, they both do something, cause something to happen in the world.  Now, this begins to sound explanatory.  If you want to develop an explanatory theory, this is what needs to be developed.  If you want a formalized system, this is what you need to try and formalize.

     Relational Grammar accounts for these patterns in terms of a priori categories, and thus says nothing concrete beyond that the argument of some intransitive verbs is more subject-like, and of others more object-like--in other words, is nothing more than a restatement of the empirical facts--unless we buy the idea that "1", "2", "3", and the associated theory of clause structure are wired into the cortex, or in some other way determined by the structure of the human organism.  But even that is only a preliminary "explanation"--somebody has to come up with a hypothesis as to how (and why) such a thing could have gotten wired in.

     Now, the RG account does make one interesting claim/prediction--a universal maximum of 3 terms per clause.  Again, in a sense this "prediction" is merely a restatement of the typological facts, and the "terms" of Tesnière and his Relational successors are simply a generalization over certain data.  But the theory does make an explicit prediction that this typological tendency is universal and exceptionless.  And indeed it seems to be.  So this is a prediction which any alternative account should be eventually able to match.  But still, even with a prediction, there is no explanation here.  The terms are "primes" of the theory, but this is not physics or geometry, and we're not entitled to primes, any more than, say, biologists are.  Before the theory is anything interesting, it owes us some story about where these "primes" come from, and why the magic number 3?

     In effect, the "Unaccusative Hypothesis" is nothing but a less explicit statement of the facts.  It accomplishes nothing except to situate the problem within a presupposed theoretical framework.  We are still left with the question, why are some arguments Subjects and some Objects?  We want to know what determines the behavior of the argument or arguments of a particular predicate; all that this "hypothesis" does is to label it.  (Perlmutter and Postal's (1984) radical proposal that the phenomenon might possibly be semantics-driven was shot down the instant it saw print--in fact, 50 pages before it saw print (Rosen 1984)).

 

 

2.3  Innateness and Autonomy

 

The so-called "innateness" question is sometimes presented as a basic division between Generative and Functional approaches to language.  This is, however, a significant misrepresentation.  The real issue is not the vaguely-defined notion of "innateness" of language capacity, but the somewhat (though not yet satisfactorily) more precise issue of the autonomy of syntax.  Much of the dismissive rhetoric from both camps fails to disentangle the issues.  One extreme position associated with Generative linguistics is that there is an autonomous language "module" in the brain, and that most basic facts about language are what they are because they are constrained by the structure of this module.  Since the structure of the brain is obviously part of the genetic endowment of the human species, so are the existence and form of the language module.

     Functionalists generally are skeptical of the autonomy hypothesis, which has historically served to short-circuit any attempt to search for functional explanations.  For if language is the way it is because that's what's wired into the brain, then explanations in terms of function are at best otiose, and at worst perverse.  But this represents an egregious violation of parsimony--for if aspects of language can be explained in terms of non-linguistic constructs which are independently needed to explain other aspects of perception and cognition, then there is no reason to hypothesize specifically linguistic structures to account for the same facts.

     Obviously, though, these other constructs must ultimately be grounded in the structure of the brain, and thus are in some sense part of the innate endowment of the human species.  The real issue between generativists and functionalists is not whether there are generalizations about language for which adequate explanation may require reference to innate structures, but rather the extent to which an understanding of language requires reference to neural structures genetically dedicated to language.

     I will in fact appeal at several points during these lectures to psychological constructs which have every appearance of being in some meaningful and specific sense innate--for example, edge effects in the grammar of topic and focus, or figure-ground organization at the root of case theory.  If, by "Universal Grammar", we are simply (metaphorically?) referring to a set of such psychological principles, then we are all on the same page.  But in general usage Universal Grammar means something different, an innate set of specifically linguistic principles.  There is a vicious circularity at the root of Generative theory here, since there is no independent evidence for UG beyond the very data which it is supposed to explain.  For our kind of innateness we can find independent extralinguistic support.

     It is entirely possible--indeed, highly probable--that when we thoroughly understand how, for example, Figure-Ground organization structures grammar, it will be clear that our inherited prelinguistic structure has become specially adapted to language.  In other words, it is probable that there are in fact evolutionary developments in the organization of the human brain which represent adaptations specifically to, and for, language.  But there is every reason to believe that these represent small changes in pre-existing cognitive and perceptual structures, and no reason whatever to imagine that any of them, or the sum of all of them, constitute a radically novel, "autonomous" system, or can be usefully thought of as a distinct, coherent "language organ".

 

     Man possesses, as one of his most marked and distinctive characteristics, a faculty or capacity of speech--or, more accurately, various faculties and capacities which lead inevitably to the production of speech:  but the faculties are one thing, and their elaborated products are another and very different one.  So man has a capacity for art, for the invention of instruments, for finding out and applying the resources of mathematics ... but no man is born an artist, an engineer, or a calculist, any more than he is born a speaker.  (Whitney 1897:278-9)

 

     It is self-evident that the ability to learn and use language as humans do is part of the evolutionary inheritance of the species.  But it is far from obvious that this involves dedicated syntax hardware.  To take a simple example:  the tremendous increase in the association cortex in humans (Hebb 1949) is no doubt crucial to our ability to acquire and access the huge, massively interconnected lexicon which is characteristic of human language.  Indeed, quite regardless of the question of whether this adaptation alone is sufficient to explain the human capacity for language (Jerison 1973, Passingham 1982), it is undoubtedly necessary, and its contribution to language must be at least part of the selective advantage which made such an investment in costly cortex adaptive.  But clearly this is not what anyone in the current debate means by "innate language faculty".

     Let us stipulate, so as not to get bogged down in pointless argument, that it is clear that there must be aspects of the human nervous system--among those which distinguish it from all other known nervous systems--which allow for language.  It is less clear a priori that any of the structures involved constitute adaptations specific to language, i.e. a discrete "language faculty", but again we may stipulate for the sake of argument that there is good reason to believe that the brain of Homo sapiens is as it is in part because of evolutionary adaptations for and to language.  Let us further stipulate two factors which clearly must be present for the development of language, and which must represent part of the human biological endowment:  the urge to communicate (characteristic, to some degree, of any truly social species) and the "symbolic capacity" described by Deacon (1997).

     Beyond this, it is clear on general grounds of scientific methodology that without specific evidence we must be cautious about how much structure we want to attribute to specifically linguistic neural adaptations.  In practice, this means that whatever we can explain without invoking otherwise unmotivated linguistic structure should be so explained, and that our "language faculty", "LAD", "Universal Grammar", or however we wish to think of it, should be invoked only to explain patterns which cannot be explained using more general, independently motivated principles.  If we assume the hypothesis of innate, dedicated Universal Grammar, this necessarily implies that we are hypothesizing complex neural structures.  By the standard economy argument which is the basis of all science, simpler is better:  the less structure we have to hypothesize here, the better a theory we've got.  And, of course, the less that the theory of Universal Grammar has to account for, the simpler it can be.

     In the Chomskyan tradition, the goal of linguistics is an abstract formal theory of Universal Grammar.  Therefore, the less that the theory has to explain, the better.  This line of reasoning leads inexorably to a research strategy in which we attempt to provide explanations for as much as possible in terms of already established psychological or neurological constructs, trying to identify the irreducible residue that might plausibly reflect hard-wired structures.  And this necessarily leads to a research strategy which is, essentially, Functionalism.  If you start from the assumption that nothing about language is innate, then if you're wrong, you'll eventually have to face the fact.  As linguists, we are ultimately responsible for explaining everything, and if you're left with an irreducible residue, then you know you have to start thinking that some aspects of the subject matter might just be given.  But starting with an old-fashioned Generative innatist hypothesis, if you're wrong you'll never discover it--your theory would be falsified only if it could be proven that something you assume is innate is actually explainable in other terms, but if you assume that there are no other explanations for your data, you won't look for them, and that is a good way not to find them.

 

 

2.4  The Typological Approach

 

In the early days of the Functionalist movement, attention to typology was one of the defining methodological differences between Generative and Functionalist research.  Orthodox opinion of the time regarded typical typological facts as too superficial to be of any interest; only detailed investigation of the facts of a specific language could cast any light on the depths of linguistic theory.  It is certainly true that deep understanding requires deep analysis, and this must always start with a thorough understanding of the facts of a particular language.  What typology does for us is to help sort out what kinds of data require functional explanation.  Isolated arbitrary facts of a particular language may have many different sorts of explanation, including unique and unrecoverable historical developments.  But patterns of structure, and of structure-function correlation, that repeat themselves throughout the world, must be motivated.  (Typological awareness could have performed the equivalent service for Generative Grammar at any point in time, sorting out what kinds of data need to be accounted for at the theoretical level--but Generativists, in general, have tended to want to fold as much structure as possible into theory).

     Constructions can be classified and compared across languages structurally and functionally.  For example, we can look at recurrent structural properties across languages of, say, reduplication--prefixal, suffixal, infixal, full, partial, affecting verbs, nouns, etc.  This is, for the most part, the research program of formalist syntax.  And we can look at recurrent functional properties of reduplication:  plural, distributive, imperfective, persistive, etc.   Or we can start from function, and look cross-linguistically at the various expressions of imperfectivity, of which reduplication would be one of several.

     Of course it is logically possible that there could be no principled relation between structure and function--that we could expect to find equal numbers of languages in which reduplication of a verb stem codes imperfectivity and in which it codes perfectivity, and reduplication of a noun stem equally likely to code plural or singular.  (If you think this example some kind of self-evident reductio ad absurdum, can you explain why?)  Or, for another example, we might expect to find, among languages with structurally equivalent noun-incorporation constructions, that in some the incorporated form codes partitivity of the object, in others definiteness, while in others still it might be the unmarked transitive construction, with the unincorporated "normal" construction coding some pragmatically marked function.  A basic task of typological exploration has been to determine whether this is the case.  And it clearly is not--we find recurrent structure-function pairings across languages.  Reduplication has a number of possible functions associated with it in different languages, but marking the singular category of nouns is not one of them, while marking plural is a common one.  On the other hand, it is imaginable that we might find perfect correlation, i.e. that a given semantic or pragmatic function is always expressed by the same structural means in every language.  But this is notoriously not the case; otherwise there would be no grounds for argument.

     But we DO find that, cross-linguistically, certain structures tend to be used for certain functions, and certain functions to be coded by certain structures.  This inescapably implies that syntax cannot be "autonomous" with respect to function.  Further, typological investigation shows a principled relation between structure and function, most easily seen in the process of grammaticalization.
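
     The logic of this kind of typological survey is simple enough to sketch in a few lines of code.  The following is a toy illustration only--the language sample is invented, not real typological data--of how one cross-tabulates structure-function pairings over a sample; recurrent skew in such a table (reduplication coding plural but never singular) is the datum that demands functional explanation:

     from collections import Counter

     # Toy structure-function survey; each tuple is (language, structure,
     # function).  The data are invented placeholders, not real typology.
     survey = [
         ("L1", "reduplication", "plural"),
         ("L2", "reduplication", "plural"),
         ("L3", "reduplication", "imperfective"),
         ("L4", "suffix",        "plural"),
         ("L5", "reduplication", "distributive"),
     ]

     pairings = Counter((structure, function)
                        for _, structure, function in survey)

     # If structure were autonomous with respect to function, all logically
     # possible pairings would be about equally frequent across the sample.
     for (structure, function), n in pairings.most_common():
         print(structure, "->", function, ":", n)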

 

 

3.0  The Form of a Functional Grammar

 

Functional and Generative theory differ on the very conception of the object of study.  For Bloomfield (1926/1957) and his successors, a language is "the totality of utterances that can be made in a speech-community" (Bloomfield 1926/1957:26) or "a set (finite or infinite) of sentences, each finite in length and constructed out of a finite set of elements" (Chomsky 1957:13).[4]  This is not the ordinary language meaning of language, of course; when we speak of "knowing" a language, we mean knowing how to produce appropriate utterances (not necessarily "sentences") at will.  In Generative terms, this knowledge would consist of the grammar, with the "appropriateness" handled by a set of interfaces between the grammar and some undetermined set of extralinguistic cognitive modules.

     For Functionalists, a language is a set of constructions, from morphemes to discourse structures.  A construction is a pairing of form and function (Langacker 1987, Goldberg 1995).  (Construction Grammar is an attempt to formalize one conception of a Functionalist grammar).  These constructions are the tools which speakers use to organize and communicate mental representations, and, as with any tool, their form can only be understood in relation to their function.  But, like any artifact, their form is not completely determined by their function.  Any tool is the product of a particular culture, and reflects the design history, esthetics, and the particular technological needs and wants of the culture and the individual maker.

     Most of the major and many minor constructions in a language are in substantial part functionally (i.e. semantically/pragmatically) motivated.  But note that function has to work with what is there--new tools can only be fashioned out of the materials at hand, which are the product of thousands of generations of language creation and adaptation.  A common misunderstanding (or parody) of the concept of functional explanation arises when it is interpreted against a background of Generative theory, which is conceived of in terms of a pre-determined set of syntactic elements.  If we begin with the assumption that there is a fixed, universal set of functions, and a fixed, universal set of possible structural patterns, then the idea that form follows function implies a theory in which there is an appropriate structure for each predefined function.  In that case, of course, all languages would be pretty much alike.  But there is no predefined set of functions--there are functions which are relevant to all human communities, but it is their universal relevance which makes them universally linguistic, not some imaginary neural representation of them.  And there is no predefined set of structures--again, there are recurrent patterns, found in languages around the world, but they are recurrent because they are effective designs for carrying out recurrent tasks--the fact (to the extent that it is a fact, which we will discuss later) that every language has something that can be called a subject is a fact of the same kind as the fact that every culture has something that could be called a hammer, or some kind of technological fix for starting fires.

     Constructions include all individual morphemes and words, and categories like noun, subject, NP, modal, antipassive, etc., to the extent that they can be structurally justified in a particular language.  Functionalists differ on how many and which of the multitudinous attested categories of languages they are willing to simply assume the existence and universality of--noun and verb are pretty popular; subject has its adherents (Givón 1984, 1997) and detractors (Dryer 1997, Chafe and Mithun 1999).  NP gets a pretty free ride, but is not an article of faith for anyone; VP is problematic (Givón 1995).  In general, though, Functionalists do not subscribe to any doctrine of universal structure; recurrent structures reflect functional rather than structural universals.

     Language is a learned system, a system of learned categories (NP, sentence, etc.).  This is hardly controversial.  We therefore expect the category structure of this system to follow general principles of knowledge representation--e.g. to manifest prototype effects, if this is in fact how cognitive categories behave.  This is extremely controversial, but has to be taken as the null hypothesis--even if linguistic knowledge is actually represented in a different cognitive "module", why would we expect the nature of the representation and of our access to it to be fundamentally different in kind from what goes on in the rest of cognition?
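
     As a toy formalization of what prototype effects would look like in such a system, consider the following sketch, in which membership in a category is graded by similarity to a prototype feature vector.  The features and values are invented for illustration; this is a sketch of the null hypothesis just described, not a claim about actual mental representation:

     # Graded category membership under prototype theory (a toy
     # formalization).  The three invented binary features might be read as
     # (concrete, time-stable, referential).
     NOUN_PROTOTYPE = (1, 1, 1)

     def membership(item, prototype=NOUN_PROTOTYPE):
         """Proportion of features shared with the prototype--graded rather
         than all-or-nothing, which is the signature prototype effect."""
         shared = sum(a == b for a, b in zip(item, prototype))
         return shared / len(prototype)

     print(membership((1, 1, 1)))  # 'dog':     1.0, a prototypical noun
     print(membership((0, 1, 1)))  # 'honesty': ~0.67, peripheral but nouny
     print(membership((0, 0, 0)))  # fully verb-like: 0.0, not a noun at all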

 

 

4.0  Functional Explanation:  Motivation, Routinization, and Diachrony

 

A popular caricature of functionalism depicts it as asserting that a clear synchronic semantic or pragmatic motivation can be found for every fact of every language.  But no such claim is part of Functionalist theory.  Quite the contrary--functionalists, much more than generativists, are at home and comfortable with arbitrariness in grammar, because we can actually explain it.  Since the earliest days of the functionalist movement it has been standard practice to invoke diachronic explanations for certain kinds of linguistic fact (see e.g. Greenberg 1969, 1979, Givón 1971, 1975, 1976, and especially 1979, Li and Thompson 1974, 1977, etc.)  The conspicuous inability of most formal linguists to understand this aspect of functionalism stems from an unwavering, and often unthinking, conviction of the reality and broad relevance of the "learnability" problem.

     Historical linguistics has traditionally worked on the intuitively obvious assumption that morphology always starts out regular and transparent.  Irregularity and opacity arise as sound change obscures old conditioning environments, erasing the motivation for alternations which once upon a time made sense, but which over time have become unpredictable.  (Essentially the same idea is recognized (synchronically) in the Generative distinction between core and peripheral grammar, which is to say, between the syntactic facts that can be easily motivated in terms of the theory and those which can't, and therefore don't need to be).

     This is exactly our approach to explaining syntax.  In any language, or in any set of typological data, some syntactic facts are clearly functionally motivated--for example, surface case marking in a typical Agent-Patient marking language.  Others lack an evident functional motivation--for example, Greenberg-type cross-categorial ordering correlations.  Formal theory in general makes no serious distinction between these two types of syntactic fact, since it does not recognize function as an explanatory factor.  To functionalists, obviously, both types, and the differences between them, are of fundamental theoretical importance.  In a commonplace parody of functionalism (see e.g. Newmeyer 1983), functionalists are assumed to claim that every syntactic fact must have a functional motivation.  But there are well-known empirical problems with such a claim, and we don't need to take it seriously, even as a straw man.

 

 

Motivation

 

The simplest and most fundamental motivation is simple reference.  That is, the reason why an English speaker wanting to communicate something about a dog--even something so simple as the presence of one--might say dog is that that is what the word means--its function, simply put, is to refer to the concept 'dog'.  While this example is simple, it is not trivial; it is the starting point for the whole concept of motivation as it is used in Functional linguistics.

     The next step is the idea of motivated association.  If someone wants to talk about a dog barking, among other things, they will tend to put the word referring to the dog and the word referring to bark together.  This can be thought of as a guide to the hearer to try and construct a mental representation in which these two concepts occur together, but its fundamental motivation is probably the fact that the two concepts occur as part of a single representation in the speaker's mind, and the connection is automatically reflected in the arrangement of words as the speaker expresses the thought.  This tendency--"Behaghel's Law", as it is sometimes labelled--is the basis of constituency.  If I am trying to get my addressee to conceptualize a big black dog, I will keep the words for 'big', 'black', and 'dog' together, representing their conceptual contiguity.  (This much of a notion of constituency is neatly captured in a simple dependency representation with a prohibition against lines crossing).  Note that there is nothing peculiarly syntactic about this tendency.  If someone is telling a story, or describing something, the narrative or description will group related elements together--people who are unable or don't bother to do this are regarded as incompetent, or even incoherent, narrators or describers.  For that matter, a painter painting a representational scene will represent things in the contiguity relations in which they actually occur--or else will change them in order to represent the scene differently than he sees it.  At a fundamental conceptual level, putting the elements of a noun phrase in syntactic contiguity is exactly the same thing.
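
     The no-crossing-lines constraint just mentioned has a standard computational counterpart: projectivity in a dependency tree.  Here is a minimal sketch, assuming a common encoding in which heads[i] gives the index of word i's head (the sentences and orders in the comments are invented for illustration):

     def is_projective(heads):
         """True iff no two dependency arcs cross when drawn above the
         sentence.  heads[i] is the index of word i's head; the root has
         head -1."""
         arcs = [tuple(sorted((h, d))) for d, h in enumerate(heads) if h >= 0]
         for i, (a, b) in enumerate(arcs):
             for (c, d) in arcs[i + 1:]:
                 # Two arcs cross iff exactly one endpoint of one arc falls
                 # strictly inside the span of the other.
                 if a < c < b < d or c < a < d < b:
                     return False
         return True

     # "big black dog barks": 'big' and 'black' depend on 'dog', 'dog' on
     # 'barks'.  The contiguous NP keeps the arcs nested, hence projective.
     print(is_projective([2, 2, 3, -1]))   # True

     # Interleaving dependents of different heads (word 0 -> word 2, word 1
     # -> word 3) forces arcs to cross: the discontinuous-NP configuration.
     print(is_projective([2, 3, -1, 2]))   # False

     In these terms, Behaghel's Law is the observation that conceptual contiguity tends to surface as projectivity.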

     Other examples of motivation require that we postulate some motivations or cognitive structures on the part of speakers.  Since Bloomfield, distributionalist theoreticians have found this a dangerous practice, smacking of circularity.  After all, how do we know that speakers have mental representations which include "topics", except for the structural facts which force us to recognize topic as a linguistic category?  And can we then use this construct to "explain" the very facts which motivated us to recognize it?  (Cf. Tomlin 1997)

     Some Functionalist researchers have made considerable efforts to break out of this apparent circularity, by, for example, developing syntax-independent ways of measuring (Givón 1983) or manipulating (Tomlin 1997) topicality.  But for the most part the circularity is more apparent than real.  In many cases, as we will see, the syntactic evidence points to a motivation for which there is ample psychological or, for that matter, common-sense justification.

     The basic function of language is to encode a schematic representation of a mental representation.  (Despite the rather bizarre demurrals which occasionally pop out from the Generative camp, the basic function of such encoding is self-evidently to communicate a representation to other people--but there is no need to pursue that argument at the moment).  The content of a mental representation of a scene/event includes a representation of the scene, presented from a particular perspective, with a particular hierarchy of foci of attention.  Representation and attention are mechanisms of perception well studied by psychologists.  Perspective, and the domain of deixis, are a sort of categorial problem child, being neither entirely perceptual nor cognitive nor social, and to my knowledge they have received less systematic attention from psychologists (but see e.g. Bühler 1934, Osgood 1980, von Glasersfeld, Rommetveit, inter alia).  Nor have they been a topic of great interest in late 20th century linguistics, perhaps being regarded as pragmatic phenomena of marginal relevance to "core syntax".  But they are of far more than marginal relevance--as we will see, such phenomena as inverse systems and split ergative marking are fundamentally deictic in nature (DeLancey 1981a).  In any case, perspective and point of view are undeniably a basic part of our everyday phenomenological experience, and hardly need extensive justification as functional motivations.  The basic structures of attention and representation are built into the perceptual and cognitive system--so why should we not expect these structures to inform syntax, which is after all a system (or set of strategies) for encoding representation and attention?

     A discourse--which may be only a single utterance--involves one or more (but typically two or more) interlocutors and takes place at an actual place and time.  It may also have a narrative deictic center distinct from the actual speech situation (and which may change in the course of an extended narrative), and a location in space and time in an established shared fictive universe (i.e. one presented in terms of the network of culturally-defined models indexed by the language of the discourse)--by default the present shared world of the interlocutors.  Utterances and sentences in the discourse are anchored to the speech situation by tense marking, by 1st and 2nd person (Speech Act Participants, or SAPs) pronouns and other grammatical reflections of their deictic centrality (e.g. inverse and split ergative clause structure; see DeLancey 1981a--we will discuss some of these data in Lecture 6; for more exotic examples see DeLancey 1992a)[5], and by lexically deictic 'go'/'come' verbs or grammatical devices for specifying deictic orientation of motion.  Inverse marking and motional deixis may also be used to anchor a sentence to a narrative deictic center and to a fictional universe--i.e. anything other than the culturally sanctioned public interpretation of the shared present.  (A "true" account of a past event takes place in a fictional universe by this definition).

     So a discourse "takes place" in a mental space constructed cooperatively by the interlocutors.  This space has the essential structure of actual space-time, as perceived by a particular viewpoint character.  Within this defined space, a clause represents a single event or described state.  A finite clause presents an event or state, organized like a percept, that is, presented from a specific point of view, with attention focussed on a particular element in the scene, which is thus organized into Figure and Ground, like any other percept--indeed, a sentence has the same kind of nested Figure-Ground structure as a percept.  I intend to show in these lectures that, given just a few such psychological constructs--figure/ground, motion, point-of-view, focus of attention, elementary causation à la Michotte--I can give you a lot of syntax.  Yes, there is innate structure, but it is pretty basic, and none of it fundamentally linguistic.

 

 

Routinization

 

Routinization is the genesis of grammar.  In the third lecture we will discuss at much greater length the theory of grammaticalization, which is the diachronic study of the routinization process and its effects.  The basic principle is a simple one, again familiar from many areas of human activity.  An organism faced with carrying out an unfamiliar task must expend significant amounts of cognitive capacity on it, and will not necessarily hit upon the most efficient and streamlined way of carrying it out.  But a task which has to be carried out frequently eventually becomes routinized--it requires little thought, because anything that needs to be figured out about how to do it has been figured out long ago.  If the task is one which must be regularly carried out by many or all people in a particular community, over time the community will develop a set, streamlined way, or a specially designed tool, for doing it.  The set strategy, or the use of the special tool, will then be learned as part of the culture of the community, so that succeeding generations don't have to invent new strategies for dealing with a problem which their ancestors already solved.

     Let us return to our imaginary primordial language scene, and imagine a language builder, with a substantial vocabulary of nouns and verbs (where those come from we will talk about in the next lecture) but no syntax.  She observes someone picking up a stick.  For reasons we have already discussed, a language builder wishing to communicate this representation will say the words for 'pick up' and 'stick'.  If the stick-picker is not already a focus of attention of both speaker and hearer, she may also produce a word referring to him (most likely a name), but for now let's just think about "inner" arguments (a topic which we will return to, in a very different guise, in Lecture 4).

     Now let us imagine a more complicated event--someone picks up a stick and uses it to pry the bark off a fallen log.[6]  The most obvious way of expressing this will be to separately describe the two events:  pickup stick and pry bark.  We have already explained why pickup and stick go together, and likewise pry and bark.  Though our primordial language builders may not have hit on word order yet, they will still have this much constituency: this clustering is self-evidently far more natural than any other possibility, e.g. pry stick bark pickup. 

     Now, a fundamental biological fact about human beings is that we are tool users--we use things, like sticks, to accomplish tasks, like prying up bark.  Therefore event clusters like this, in which someone takes a potential instrument in hand and uses it to carry out a task, will be very common in the experience of any human being.  If this constellation of subevents is something which speakers often have reason to want to represent linguistically, then over time the construction pickup N will become routinized as the linguistic device for expressing this category of experience.  In our primordial scenario there's still no other grammar, except for our nascent instrumental construction, so I cannot salt the example with evidence of grammaticalization.  But in actual languages, we know that there is a set of structural changes which typically accompany this kind of routinization--as a verbal construction becomes routinized in this kind of function, it tends to lose its typically verbal behaviors (e.g. agreement, tense/aspect marking and other specifically verbal morphology), turning into a more streamlined tool, more precisely designed for its specific purpose.

     Thus routinization is usually itself motivated--it represents the linguistic instantiation of a behavior universal to humans, and indeed to higher vertebrates.  But some routinization may be more arbitrary than that.  In our primordial scenario, word order has not yet been discovered--the words 'pick up' and 'stick' can presumably occur in either of the possible orders.  (When we come to the study of topicality we will see possible motivations which might affect this choice, but for our present thought experiment let us leave it as arbitrary).  But human beings are creatures of habit and of fashion, and cultures often settle on arbitrary, formulaic ways of performing common tasks.  If, in our community of language-builders, it has become the custom to present propositions such as we are imagining with the argument first, or with the verb first, then they have invented basic word order, by arbitrarily routinizing the choice of order.  As we will see, this can have far-reaching effects on the future development of the language.  Let us suppose that in this community the fashion is verb first.  As pickup becomes routinized in its instrumental function, it will, over time, develop into the functional equivalent of an adposition.  (It cannot develop into a true, structurally-diagnosable adposition until we have some more syntax).  More specifically, it will develop into a preposition, because its position preceding its argument is already fixed.  In many languages, subordinating conjunctions develop from adpositional constructions, so that if this particular language has developed prepositions, we can predict that it is likely, further down the road, to develop clause-initial subordinators.  The opposite choice of argument-verb order, in contrast, would give us postpositions and clause-final subordinators.

 

 

The Origins of Opacity

 

Just as in phonology, diachronic processes frequently obscure the original motivation for a construction.  Consider a simple example.  English has a productive construction of the form V (NP) PP, in which the PP represents its NP as the cause of the state or event, as in faint from exhaustion, be laid up with pneumonia, or crack under pressure.  In most instances of this type we can identify a semantic motivation for the choice of preposition.  Undoubtedly the commonest preposition in this function is from, and it is no coincidence that this is also the most semantically transparent.  The use of ablative forms to indicate a causal relationship is cross-linguistically widespread[7] (Anderson 1971, Diehl 1975, DeLancey 1981), and well-attested in both adult and child English (DeLancey 1984, Clark and Carpenter 1989).  Under occurs with a set of nouns literally or metaphorically associated with the idea of weight bearing on something (weight, pressure, strain, etc.).  The simplest concrete physical instance of such a configuration involves a heavy mass on top of something else, which then bears the strain of the weight or, as the case may be, fails to bear it.  This is the concrete basis for metaphorical construals like He broke under interrogation.  Thus under, like from, has a synchronic semantic motivation in this construction.

     There is, however, one prepositional use of this kind which is completely opaque.  We find of used with causal force in the fixed lexical expressions sick and/or tired of, and with die (die of cancer/hunger/embarrassment/a broken heart, etc.)  This usage lacks synchronic motivation; there is nothing in the productive use of of in modern English which predicts or explains it.  However, the documentary history of English provides ample evidence for earlier productive uses of of with explicit causal force.  It once occurred, in something very like its modern use with die, with a much wider range of predicates (all examples from the Oxford English Dictionary):

 

     5)   Ionas was exceadinge glad of the wylde vyne.  (1535)

 

It also occurred productively marking the agents of passive sentences:

     6)   That the juice that the ground requires be not sucked out of the sunne.  (1577)

     7)   The relatiue is not always gouerned of the verbe that he commeth before.  (1590)

     8)   Being warned of God in a dream ... (1611)

Both of these are well-attested synchronic uses of from, but this sense of of is no longer a productive part of the language.  Nevertheless, as we see, it persists in a handful of contemporary constructions.

     Thus the explanation for why we use of in die of cancer is of a different kind from the explanation for the use of from in exhausted from overwork.  The use of from in this construction is motivated; it makes semantic sense.  The use of of in the same function is not synchronically motivated; it does not make semantic sense.  However, when that construction first developed, the semantics of of were different, and its use in this sense was semantically motivated, in exactly the same way that the contemporary use of from is.  Thus we have a case where diachronic change has erased the original motivation for a particular aspect of a construction.


References

Anderson, John. 1971.  The Grammar of Case:  Towards a Localistic Theory.  Cambridge: Cambridge University Press.

Bloomfield, Leonard. 1926.  A set of postulates for the science of language.  Language 2.153-64.  Repr. in M. Joos, ed., Readings in Linguistics I.  Chicago: University of Chicago Press, 1957.

Bühler, Karl. 1934. Sprachtheorie: Die Darstellungsfunktion der Sprache.  Jena: Gustav Fischer.  English translation (1990): Theory of language:  The representational function of language.  Amsterdam: Benjamins.

Chomsky, Noam. 1957.  Syntactic structures.  's-Gravenhage: Mouton.

     . 1965.  Aspects of the theory of syntax.  Cambridge, MA:  MIT Press.

Clark, Eve, and Kathie Carpenter. 1989.  The notion of source in language acquisition.  Language 65:1-30.

Curme, George. 1931.  Syntax. New York: Heath.

Darnell, Michael, E. Moravcsik, F. Newmeyer, M. Noonan, and K. Wheatley, eds.  1999.  Functionalism and formalism in linguistics.  (Studies in Language Companion Series No. 41).  Amsterdam: Benjamins.

Deacon, Terrence William. 1997.  The symbolic species: The co-evolution of language and the brain.  New York: Norton.

DeLancey, Scott.  1981.  An interpretation of split ergativity and related patterns.  Language 57.3:626-57.

     . 1984.  Notes on agentivity and causation.  Studies in Language 8.2.181-213.

     . 1992a. The historical status of the conjunct/disjunct pattern in Tibeto-Burman.  Acta Linguistica Hafniensia 25:39-62.

     . 1992b. Sunwar copulas.  Linguistics of the Tibeto-Burman Area 15:1.31-38.

Dickinson, Connie. 2000.  Mirativity in Tsafiki.  Studies in Language 24:2.379-422.

Diehl, Lon. 1975.  Space Case: Some principles and their implications concerning linear order in natural languages.  Working Papers of the Summer Institute of Linguistics, University of North Dakota Session 19.93-150.

Dryer, Matthew. 1997.  Are Grammatical Relations Universal?  In J. Bybee et al., eds., Essays on Language Function and Language Type: Dedicated to T. Givón, pp. 115-143.  Amsterdam: John Benjamins.

Gabelentz, Georg von der.  1891.  Die Sprachwissenschaft:  Ihre Aufgaben, Methoden und bisherigen Ergebnisse.  Leipzig.  (repr. 1984: Tübingen: Narr).

Gernsbacher, M.A., and D. Hargreaves. 1988.  Accessing sentence participants:  The advantage of first mention.  Journal of Memory and Language 27.699-717.

     , and      . 1992.  The privilege of primacy:  Experimental data and cognitive explanations.  In D. Payne, ed., Pragmatics of word order flexibility, pp. 83-116.  (Typological Studies in Language 22).  Amsterdam and Philadelphia: Benjamins.

Givón, Talmy.  1971.  Historical syntax and synchronic morphology:  An archaeologist's field trip.  CLS 7.

     . 1975.  Serial verbs and syntactic change:  Niger-Congo.  In C. Li, ed., 1975, pp. 47-112.

     . 1976.  Topic, pronoun, and grammatical agreement.  In C. Li, ed., 1976, pp. 149-88.

     . 1979.  On understanding grammar.  New York: Academic Press.

     . 1995.  Functionalism and grammar.  Amsterdam: Benjamins.

Greenberg, Joseph. 1966.  Some universals of grammar with particular reference to the order of meaningful elements.  In Greenberg, ed., 1966, pp. 73-113.

     , ed. 1966. Universals of Language.  Cambridge, MA: M.I.T. Press.

Haegeman, Liliane. 1994.  Introduction to Government & Binding Theory, 2nd edition.  Oxford: Blackwell.

Hargreaves, David. 1991.  The grammar of intentional action in the grammar of Kathmandu Newari.  PhD dissertation, University of Oregon.

Hebb, D. O.  1949.  The organization of behavior.  New York: Wiley.

Hockett, Charles. 1966.  The problem of universals in language.  In Greenberg, ed., 1966, pp. 1-29.

Hopper, Paul. 1987. Emergent Grammar.  Berkeley Linguistics Society: Proceedings of the Annual Meeting 13, 139-157.

     . 1991.  On some principles of grammaticization.  In Traugott & Heine, eds., vol. 1, pp. 17-35.

Huck, Geoffrey, and John Goldsmith. 1995.  Ideology and linguistic theory:  Noam Chomsky and the Deep Structure debates.  London: Routledge.

Jerison, Harry. 1973.  Evolution of the brain and intelligence.  New York: Academic Press.

Jespersen, Otto.  1961.  A modern English grammar on historical principles. London: George Allen.

Li, Charles, ed. 1975.  Word order and word order change.  Austin:  University of Texas Press.

     , ed. 1976.  Subject and topic.  New York: Academic Press.

     , ed. 1977.  Mechanisms of syntactic change.  Austin: University of Texas Press.

     , and S. A. Thompson. 1974.  An explanation of word order change SVO > SOV.  Foundations of Language 12.201-214.

     , and      . 1977.  A mechanism for the development of copula morphemes.  In Li, ed., 1977, pp. 419-44.

Mithun, Marianne. 1991.  Active/agentive case marking and its motivations.  Language 67:510-46.

Newmeyer, Frederick. 1983. Grammatical theory:  Its limits and possibilities.  Chicago: University of Chicago Press.

     . 1999.  Some remarks on the Functionalist-Formalist controversy in linguistics.  In M. Darnell et al., eds., pp. 469-86.

     . 2000.  The discrete nature of syntactic categories: Against a prototype-based account.  In Robert Borsley, ed., Syntax and semantics 32:  The nature and function of syntactic categories, pp. 221-250.  San Diego: Academic Press.

Noonan, Michael. 1999.  Non-Structuralist syntax.  In M. Darnell et al., eds., pp. 11-31.

Osgood, Charles. 1980. Lectures on language performance.  Berlin: Springer-Verlag.

Passingham, R. E. 1982.  The human primate.  New York: Freeman.

Paul, Hermann. 1886. Prinzipien der Sprachgeschichte.  Halle:  Niemeyer.  (5th edition, 1920)  English translation:  Principles of the history of language, London: Longmans, Green (1891).

Perlmutter, David, and Carol Rosen, eds.  1984.  Studies in Relational Grammar 2.  Chicago: University of Chicago Press.

     , and Paul Postal. 1984.  The 1-Advancement Exclusiveness Law.  In Perlmutter and Rosen, eds., 1984, pp. 81-125.

Quirk, Randolph, S. Greenbaum, G. Leech, and J. Svartvik.  1985.  A comprehensive grammar of the English language.  London: Longman.

Rosen, Carol. 1984.  The interface between semantic roles and grammatical relations.  In Perlmutter and Rosen, eds., 1984, pp. 38-77.

Schmidt, Wilhelm. 1926.  Die Sprachfamilien und Sprachenkreise der Erde.  Heidelberg: Carl Winter.

Schulze, Wolfgang.  1998.  Person, Klasse, Kongruenz: Fragmente einer Kategorialtypologie des einfachen Satzes in den ostkaukasischen Sprachen.  Munich: LINCOM Europa.

Sturtevant, Edgar H. 1947.  An introduction to linguistic science.  New Haven: Yale University Press.

Talmy, Leonard. 1985.  Force dynamics in language and thought.  Chicago Linguistic Society Parasession on Agentivity and Causation, pp. 293-333.

Tesnière, Lucien. 1959.  Éléments de syntaxe structurale.  Paris: Klincksieck.

Traugott, Elizabeth, & Bernd Heine, eds. 1991.  Approaches to grammaticalization.  (2 volumes).  (Typological Studies in Language 19).  Amsterdam:  Benjamins.

Visser, F. Th. 1969.  An historical syntax of the English language.  Part three, First half:  Syntactical units with two verbs.  Leiden: Brill.

Whitney, William Dwight. 1897.  The life and growth of language.  New York: Appleton.



    [1]The explanation given in Molière's Le malade imaginaire for why opium induces sleep is that it contains a "dormitive principle".

    [2]Though, in justice to our predecessors, it could well have had something to do with embarrassment at Schmidt's highly speculative ideas about ethnological typology, and his attempts to correlate them with linguistic typology.

    [3]Sic.  The content of the word "empirical" in this sentence has never been clear to me.

    [4]The difference between the two formulations presumably being that for Chomsky, the "set of sentences" will be the totality of sentences generated by the grammar of a particular speaker, for Bloomfield the totality of sentences which would be acceptable to members of the community.

    [5]As well as Hargreaves 1991, Dickinson 2000.

    [6]To expose the grubs underneath.

    [7]Widely-cited examples include the use of German von and Latin ab to mark the agents of passives.