They share the expressive power to express features such as the cross-serial dependencies that human languages possess [ ], and may be efficiently processable and learnable. We are not aware of any formally founded claims about mild context-sensitivity in the domain of music or animal songs. Having introduced the extended CH, we can use figure 3 to give a general overview of the locations of the main results on structure building in language, music and animal song that we discussed in the framework of the extended CH.

Figure 3. A Venn diagram of the Chomsky hierarchy of formal languages with three extensions, annotated with a comparison of the hypothesized classifications of human languages, human music and animal vocalization.

However, even when considering its extensions, and despite its frequent use in recent cognitive debates, the CH may not be suited to providing a good class of cognitive or structural models that capture frequent structures in language, music and animal songs. One aspect stems from the fact that the CH is by definition fundamentally tied to rewrite rules and to the structures that different types of rewrite rules, constrained by different restrictions, may express. One well-known issue (and an aspect that the notion of mild context-sensitivity addresses) concerns the fact that repetition, repetition under a modification (such as musical transposition) and cross-serial dependencies constitute types of structures that require quite complex rewrite rules (see also the example of context-sensitive rewrite rules expressing cross-serial dependencies in reference [ ]).

In contrast, such phenomena are frequent forms of form-building in music [25] and animal song. This mismatch between the simplicity of repetitive structures and the high CH class they are mapped onto might be one of many motivations to move beyond its confines. The CH has been used extensively in recent cognitive debates on human and animal cognitive capacities, in discussing the complexity of theories and processes in various domains, and in characterizing the different types of structures that may be learnable in the artificial grammar learning and implicit learning literatures [ ].

It is important to distinguish two things here: while the formal automata that accept or generate a class of formal language in the CH may be comparatively simple, the inference procedures needed to learn such structures from examples may be considerably more complex. Importantly, the CH is a theoretical construct that organizes types of structures according to different forms of rewrite rules and, being a theory of formal languages in conjunction with idealized formal automata, it has little immediate connection with cognitive motivations or constraints such as limited memory.

The fact that it defines a set of languages that happen to be organized in mutual superset relations, and that are well explored in terms of the formal automata that produce them, does not motivate its reification in terms of mental processes, cognitive constraints or neural correlates. Although the CH has inspired research as a framework that allows the comparison of different models and supports formal negative arguments against the plausibility of certain formal languages or corresponding computational mechanisms, it does not constitute an inescapable a priori point of reference for all kinds of models of structure building or processing.

Such forms of formal comparison and proof should inspire future modelling endeavours, yet better structural or cognitive models may involve distinctions orthogonal to the CH, and may rather be designed and evaluated in the light of how well they model the data and its inherent structure. What are some different aspects that new models of structure building, and the corresponding cognitive models, should take into account?

In order to model the complexity of ecological real-world structures, they should be able to deal with graded syntactic acceptability [ ] and sequence probability; they should be grounded in considerations of descriptive parsimony, in links to semantics and in form-meaning interactions; and they should not only account for production and perception, but also consider learnability and computational complexity. Finally, formal models should be required to make predictions about empirical structures, on the basis of which they may be distinguished empirically.

There are divergent definitions of these terms. Briefly, they concern the difference between focusing on just the classes of sets of sequences that a model can produce (i.e. weak generative capacity) and focusing also on the structures the model assigns to those sequences (strong generative capacity). This is relevant, for instance, for distinguishing issue (2), long-distance dependencies, from issue (3), context-freeness, above. A long-distance dependency in itself is not enough to prove the inadequacy of finite-state models (as we stated above); context-freeness is necessary only when long-distance dependencies can be embedded within unboundedly many other long-distance dependencies.

For instance, when a bird sings songs of the structure ABⁿC and DBⁿE, we observe a long-distance dependency between A and C and between D and E, but the songs can easily be modelled with finite-state automata (figure 4) by just assuming two different hidden states from which the Bs are generated: one for the condition starting with A and ending with C, and one for the other.
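
As a minimal concrete sketch (an acceptor version of the figure 4 idea; the state names, the encoding of songs as strings and the requirement n ≥ 1 are our own assumptions), the following finite-state machine recognizes exactly the songs ABⁿC and DBⁿE, carrying the long-distance dependency in two separate B-states:

```python
# Deterministic finite-state acceptor for songs of the form A B^n C and
# D B^n E (n >= 1). The two B-states remember which opener was heard,
# which is how a finite-state model encodes this long-distance dependency.

TRANSITIONS = {
    ("start", "A"): "after_A",
    ("after_A", "B"): "B_after_A",
    ("B_after_A", "B"): "B_after_A",   # loop: arbitrarily many Bs
    ("B_after_A", "C"): "accept",
    ("start", "D"): "after_D",
    ("after_D", "B"): "B_after_D",
    ("B_after_D", "B"): "B_after_D",
    ("B_after_D", "E"): "accept",
}

def accepts(song: str) -> bool:
    """Return True iff the song matches A B^n C or D B^n E (n >= 1)."""
    state = "start"
    for element in song:
        state = TRANSITIONS.get((state, element))
        if state is None:
            return False
    return state == "accept"

assert accepts("ABBBC") and accepts("DBE")
assert not accepts("ABBBE") and not accepts("AC")
```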

This explains why some efforts to demonstrate the context-freeness of birdsong or music empirically may be unconvincing from a formal language theory perspective if they are based on just demonstrating a long-distance dependency.

However, a long-distance dependency does have consequences for the underlying model that can be assumed, in terms of its strong generative capacity, representational capacity and compressive power: in the example shown in figure 4, we were forced to duplicate the state responsible for generating B; in fact, we require 2ᵐ states, where m is the number of (potentially nested) non-local dependency pairs, such as A…C or D…E, that need to be encoded.
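
The following sketch makes this state count concrete (our own construction, not from any published model): if m dependency pairs can be open at the same time, each independently of type A…C or D…E, a finite-state model needs a distinct memory state for every possible sequence of opener choices.

```python
from itertools import product

def required_states(m: int) -> int:
    """Distinct memories a finite-state model needs with m nested pairs,
    each pair independently opened by A (closed by C) or D (closed by E)."""
    memories = set(product("AD", repeat=m))  # one opener choice per level
    return len(memories)

for m in range(1, 6):
    print(m, required_states(m))  # 2, 4, 8, 16, 32 -- grows as 2**m
```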

Therefore, if there are multiple (finite, potentially nested) non-local dependencies, the number of required states grows exponentially (see also the comparable argument regarding the implicit acquisition of such structures in references [99, ]). Note that the finite-state automaton is redundant in that it contains multiple instances of the same structure Bⁿ. On similar grounds, one may argue that human musical capacities exceed not only a Markovian but probably also a finite-state representation (which, on the grounds just presented, is not a relevant or plausible model here), based on the empirical evidence that a recent study provided: human non-musicians were found to process non-local syntactic dependencies resulting from one level of centre-embedding in ecological music excerpts [ ].

Comparable findings in animal vocalization are still missing. This example illustrates that the CH as a theoretical construct is irrelevant here for choosing the best model. If the intervening material in a long-distance dependency is very variable, even if not technically unbounded, considerations of parsimony, strong generative capacity, elegant structure-driven compression and efficiency provide strong reasons to prefer a model other than the minimally required class in the CH, or a different type of model altogether.

Further, empirical testability and evaluation, for example in terms of Bayesian model comparison, play an important role in this context. As a simple example, consider a hypothetical bird that sings two different songs, A and B, each built up from some repertoire of elements but showing a typical structure: song A might start with some slow introductory elements and then continue with faster elements, whereas song B might start with fast, high-pitched elements and continue with low-pitched ones.

We can also imagine that the bird mainly uses song A in one context, and song B in another context.

Other motivations to move beyond the confinements of the CH lie in the modelling of real-world structures that undermine some of the assumptions of the CH.

Generally, the fact that music involves not only multiple parallel streams of voices but also correlated streams of different features and complex timing has received considerable attention in music cognition research, yet it does not easily match the principles underlying the CH, which is based on modelling a single sequence of words.

This line of work also combined predictions derived from the current piece (a short-term model) with predictions derived from a corpus (a long-term model). Extending this framework, the IDyOM model [40] includes a search for progressively more information-theoretically efficient representations, which are shown in turn to give rise to progressively better predictors of human expectations. The IDyOM model has been shown to be successful in the domains of music and language [56, ]. Recent modelling approaches generalized the notion of modelling parallel feature streams into dynamic Bayesian networks, which combine the advantages of hidden Markov models with the modelling of multiple feature streams [42, ].
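
To illustrate the short-term/long-term combination in its simplest form (a toy sketch with invented numbers and a plain arithmetic mixture; the published models combine predictions in more sophisticated, entropy-weighted ways), consider two next-element predictors merged into one distribution:

```python
# LTM: next-element statistics estimated from a corpus.
# STM: next-element statistics estimated from the current piece so far.
LTM = {"C": {"C": 0.2, "D": 0.5, "E": 0.3}}
STM = {"C": {"C": 0.7, "D": 0.2, "E": 0.1}}

def combined_prediction(prev: str, w_stm: float = 0.5) -> dict:
    """Mix the STM and LTM distributions for the element following `prev`."""
    return {sym: w_stm * STM[prev][sym] + (1 - w_stm) * LTM[prev][sym]
            for sym in LTM[prev]}

print(combined_prediction("C"))  # {'C': 0.45, 'D': 0.35, 'E': 0.2}
```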

The original CH is based on a number of assumptions that turn out to be problematic in the light of ecological data. One main problem, particularly relevant in the domain of music and animal song, is that the notion of grammaticality or wellformedness, which is fundamental for establishing and testing symbolic rules, is much less clear than in language, where there are also problems (see [ ] for an insightful discussion).

Most discussions of grammatical structure in music and animal song are based on so-called positive data (i.e. attested examples), whereas linguistic research can additionally draw on negative data, such as judgements of ungrammatical examples. This difference between the linguistic and the musical case may also in part be explained by the fact that, at least in Western music, there is a large divergence between active and passive musical interaction. Furthermore, negative data are, particularly in the case of animal research, more difficult to obtain, and are also less clear-cut, or potentially graded rather than binary.


This issue motivates a number of changes in the nature of models. Models may be grounded in foundations other than grammaticality, such as optimal prediction or compression. Another important way to build better models of cognition, and to deal with the issues above, comes from employing syntactic gradience and reintroducing the probabilities that Chomsky abandoned along with his rejection of finite-state models.

Apart from the large number of recent probabilistic models, such as the ones mentioned in the previous section, that go beyond the framework of the CH, it turns out that a hierarchy of probabilistic grammars can be defined that is analogous to the classical and extended CH and exhibits the same expressive power, with the additional advantage that grammars from this hierarchy can straightforwardly deal with noisy data and frequency effects, and lend themselves to information-theoretic methodologies such as model comparison, compression or minimum description length [ , ].
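
As a minimal illustration of the difference this makes (a toy grammar of our own; the rule probabilities are invented), a probabilistic grammar assigns every string a likelihood rather than a binary grammaticality verdict:

```python
# Toy probabilistic context-free grammar:
#   S -> 'a' S 'b'   with probability 0.3
#   S -> 'ab'        with probability 0.7
# It generates a^n b^n with probability 0.3**(n-1) * 0.7, so frequency
# effects and graded acceptability fall out of the formalism directly.

def likelihood(s: str) -> float:
    """Probability that the toy grammar generates s (0.0 if underivable)."""
    if s == "ab":
        return 0.7
    if len(s) >= 4 and s[0] == "a" and s[-1] == "b":
        return 0.3 * likelihood(s[1:-1])
    return 0.0

print(likelihood("ab"))      # 0.7
print(likelihood("aaabbb"))  # 0.3 * 0.3 * 0.7 = 0.063
print(likelihood("abb"))     # 0.0 -- not derivable in this grammar
```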

Hidden Markov models constitute one type of model that has been very successful in all the domains of language, music and animal song. Comprehensively reviewed by Rabiner [ ], the HMM assumes a number of underlying hidden states, each of which emits surface symbols according to a given probabilistic emission vector; a Markov matrix defining the transition probabilities between states (including that of remaining in the same state); and a probability vector modelling the start state.
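
A hedged sketch of exactly these ingredients (all numbers invented for illustration), together with the standard forward algorithm for computing the likelihood of an observed sequence:

```python
import numpy as np

pi = np.array([0.8, 0.2])        # start vector: P(first hidden state)
A = np.array([[0.6, 0.4],        # transition matrix: P(next | current),
              [0.1, 0.9]])       # diagonal = remaining in the same state
B = np.array([[0.7, 0.2, 0.1],   # emission vectors: row = hidden state,
              [0.1, 0.3, 0.6]])  # column = surface symbol

def sequence_likelihood(observations: list) -> float:
    """P(observation sequence | model), via the forward algorithm."""
    alpha = pi * B[:, observations[0]]   # joint prob. of prefix and state
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
    return float(alpha.sum())

print(sequence_likelihood([0, 1, 2]))    # likelihood of symbols 0, 1, 2
```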

HMMs have been very successful in modelling language, music and animal songs [42, 49, 59, , ].

Thanks to the probabilities, we can talk about degrees of fit, and can thus select, in a Bayesian model comparison paradigm, the models that have the highest posterior probability given the degree of fit and prior beliefs; moreover, the probabilistic grammar framework does not require wellformedness as a criterion, but can instead use the likelihood of observing particular sentences, songs or musical structures as a criterion [8].
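
A minimal sketch of such a comparison (the two candidate models, their log-likelihoods and the uniform prior are all invented for illustration): by Bayes' rule, the posterior odds are P(M1|data)/P(M2|data) = [P(data|M1)/P(data|M2)] × [P(M1)/P(M2)].

```python
import math

def posterior_odds(loglik_m1: float, loglik_m2: float,
                   prior_m1: float = 0.5) -> float:
    """Posterior odds P(M1|data) / P(M2|data), via Bayes' rule."""
    log_odds = (loglik_m1 - loglik_m2) + math.log(prior_m1 / (1 - prior_m1))
    return math.exp(log_odds)

# Log-likelihoods of three observed songs under two hypothetical models:
loglik_m1 = sum([-2.1, -1.9, -2.3])  # M1 fits each song somewhat better
loglik_m2 = sum([-2.8, -2.5, -3.0])
print(posterior_odds(loglik_m1, loglik_m2))  # ~7.4: the odds favour M1
```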

The CH of formal grammars has its limitations, but it has played a major role in generating hypotheses to test, not only on natural language but also on animal songs and music. But where does this leave semantics? Berwick et al. [ ] argue that the transfinite-state structure of natural language cannot be properly understood in isolation from meaning. The reason why this is so is that in natural language, the transfinite-state structure is not some idiosyncratic feature of the word streams we produce, but something that plays a key role in mediating between thought (the conceptual-intentional system, in Chomsky's terms) and sound (the phonetic, articulatory-perceptual system).

Crucially, the conceptual-intentional system is also a hierarchical, combinatorial system, most often modelled using some variety of symbolic logic.

From that perspective, grammars from the extended CH describe only one half of the system; a full description of natural language would involve a transducer that maps meanings to forms and vice versa [49, ]. Depending on the type of interaction we allow between syntax and semantics, there might or might not be consequences for the set of grammatical sentences that a grammar allows if we extend the grammar with semantics.
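
To make the transducer idea concrete, here is a deliberately tiny sketch (an invented fragment, not a proposal from the literature): a single lexicon pairs predicate-argument meanings with word strings, so the same fragment serves both production and comprehension.

```python
LEX = {"DOG": "dog", "CAT": "cat", "CHASE": "chases", "SEE": "sees"}
XEL = {word: concept for concept, word in LEX.items()}  # inverse lexicon

def produce(meaning: tuple) -> str:
    """Map (PREDICATE, AGENT, PATIENT) to a string: thought -> sound."""
    predicate, agent, patient = meaning
    return f"{LEX[agent]} {LEX[predicate]} {LEX[patient]}"

def comprehend(sentence: str) -> tuple:
    """Invert produce() for this fragment: sound -> thought."""
    agent, predicate, patient = sentence.split()
    return (XEL[predicate], XEL[agent], XEL[patient])

assert produce(("CHASE", "DOG", "CAT")) == "dog chases cat"
assert comprehend("dog chases cat") == ("CHASE", "DOG", "CAT")
```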

But the extension is, in any case, relevant for assessing the adequacy of the combined model (for example, we can ask whether a particular grammar supports the required semantic analysis), as well as for determining the likelihood of sentences and of alternative analyses of a sentence. Do we need transducers to model structure building in animal songs and music? There have been debates about forms of musical meaning and their neurocognitive correlates.

However, a large number of researchers in the field agree that music may feature simple forms of associative meaning and connotation, as well as illocutionary forms of expression, but lacks the more complex forms of combinatorial semantics (see the discussion of references [ – ]). Nevertheless, it is possible, as mentioned above, to conceive of complex forms of musical tension, involving nested patterns of expectancy and prolongation, as an abstract secondary structure that motivates syntactic structures (at least in Western tonal music); by analogy, this would require future research to characterize a transducer mapping between syntactic structure and the corresponding structures of musical tension.

There have similarly been debates about the semantic content of animal communication. There are a few reported cases of potential compositional semantics in animal communication (cf. [ ]). For all animal vocalizations that have non-trivial structure, such as the songs of nightingales [ ], blackbirds [8, ], pied butcherbirds [ ] or humpback whales [ , ], it is commonly assumed that no combinatorial semantics underlies them.

However, it is important to note that the ubiquitous claim that animal songs have no combinatorial semantic content is actually based on little to no experimental data. As long as the necessary experiments have not been designed and performed, the absence of evidence of semantic content should not be taken as evidence of absence.

If animal songs do indeed lack semanticity, they would be more analogous to human music than to human language. The analogy to music would then not primarily be based on surface similarity at the level of the communicative medium (the use of pitch, timbre, rhythm or dynamics), but on functional considerations: animal songs would then not constitute a medium for conveying propositional semantics or simpler forms of meaning, but rather instances of comparably free play with form and displays of creativity (see below and Wiggins et al.).

Does this view of music-animal song analogies have any relevance for the study of language? Drawing a strict dichotomy between music and language may further be a strongly anthropomorphic distinction that has little counterpart in animal communication. Animal vocalizations may be motivated by forms of meaning that are not necessarily comparable with combinatorial semantics (for example, expressing aggression or submission, warning of predators, or fostering group cohesion or social contagion), or they may constitute free play with form for the display of creativity (for instance, but not necessarily, in the context of reproduction).

Given that, moving from the language end to the music end, structure and structure building are less constrained by semantic forms, greater richness of structural play and creativity is expected to occur on the musical side [ ].


A final move to a new class of successful models originates in a fairly recent extension of the CH framework in which the categorical symbols used in rewrite grammars are replaced by vectors. In standard symbolic as well as probabilistic grammars, a word that can function as, for instance, both an adjective and a preposition is normally modelled by having two entries in the lexicon, one with category ADJ and the other with category PREP.

In computational linguistics, vector grammars (which are closely related to earlier neural network models of linguistic structure [ , ]) are experiencing a new wave of excitement, following some successes in learning such grammars from data for practical natural language processing tasks [ – ]. While vector grammars have, to the best of our knowledge, not yet been applied to music or animal vocalizations, we expect that they offer much potential in these fields.
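
A hedged sketch of the core idea (all numbers invented; the context operators as simple linear maps are our own assumption): instead of two discrete lexicon entries, an ambiguous word gets a single vector of graded category membership, and composition with the context shifts the interpretation continuously rather than forcing a categorical choice.

```python
import numpy as np

CATEGORIES = ["ADJ", "PREP"]
word_vec = np.array([0.6, 0.4])          # one entry: graded ADJ/PREP mixture

# Toy context operators (column-stochastic matrices, invented numbers):
after_copula = np.array([[0.95, 0.40],   # an 'is ...' frame favours ADJ
                         [0.05, 0.60]])
before_np = np.array([[0.10, 0.05],      # a '... the house' frame
                      [0.90, 0.95]])     # favours PREP

for name, op in [("after copula", after_copula), ("before NP", before_np)]:
    composed = op @ word_vec             # context-shifted category vector
    print(name, dict(zip(CATEGORIES, composed.round(2).tolist())))
# after copula {'ADJ': 0.73, 'PREP': 0.27}
# before NP {'ADJ': 0.08, 'PREP': 0.92}
```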
