Research articles
 

By Dr. Leonid Perlovsky
Corresponding Author: Dr. Leonid Perlovsky
Harvard University, United States of America
Submitting Author: Dr. Leonid Perlovsky
PSYCHOLOGY

Music, Emotions, Neural Mechanisms, Mind, Language, Language Prosody, Culture, Evolution, Knowledge Instinct, Mathematical Models, Cognitive Dissonance

Perlovsky L. Music. Cognitive Function, Origin, And Evolution Of Musical Emotions. WebmedCentral PSYCHOLOGY 2011;2(2):WMC001494
doi: 10.9754/journal.wmc.2011.001494
Submitted on: 06 Feb 2011 06:39:20 PM GMT
Published on: 07 Feb 2011 06:54:24 PM GMT

Abstract


Evolutionary musicologists agree that music is an enigma. Existing theories contradict each other and cannot explain the mechanisms or functions of musical emotions in the workings of the mind, nor the evolutionary reasons for music's origins. Based on a synthesis of cognitive science and mathematical models of the mind, this paper proposes a hypothesis of a fundamental role of music in cognition and in the evolution of the mind, consciousness, and cultures. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music in parallel with the evolution of cultures and languages. Arguments are presented that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. Opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. Fast differentiation of knowledge due to language created cognitive dissonances between knowledge and instincts. Differentiated emotions were needed for resolving these dissonances. Thus the need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today’s human mind and cultures cannot exist without today’s music. The proposed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. We consider empirical data on the parallel evolution of cognition, consciousness, and music during the last three thousand years. Existing data on changes in consciousness and musical styles support the proposed hypothesis. We compare musical emotions to emotions of language prosody and emotions of cognitive dissonances. Then we propose experimental approaches toward verification of this hypothesis in psychological and neuroimaging research.

Music is a mystery


According to Darwin (1871), music “must be ranked amongst the most mysterious (abilities) with which (man) is endowed.” The suggestion that music and emotions are linked (Juslin & Sloboda, 2001) opens more questions than it answers: how does music express or create emotions, are these emotions similar to or different from other emotions, and what is their function? “Music is a human cultural universal that serves no obvious adaptive purpose, making its evolution a puzzle for evolutionary biologists” (Masataka, 2008). Kant (1790), who so brilliantly explained the epistemology of the beautiful and the sublime, could not explain music: “(As for) the expansion of the faculties… in the judgment for cognition, music will have the lowest place among (the beautiful arts)… because it merely plays with senses.” Pinker (1997) follows Kant, suggesting that music is an “auditory cheesecake,” a byproduct of natural selection that just happened to “tickle the sensitive spots.” In 2008, Nature published a series of essays on music. Their authors agreed that music is a cross-cultural universal, yet “none… has yet been able to answer the fundamental question: why does music have such power over us?” (Editorial, 2008). “We might start by accepting that it is fruitless to try to define ‘music’.” (Ball, 2008). This is just a sampling of quotes from accomplished scientists.
After reviewing selected theories, we present a hypothesis, based on arguments from cognitive science and mathematical models of the mind, that music serves a most important and concrete function in the evolution of the mind and cultures. We elucidate this function, discuss neural mechanisms, and suggest experimental verification of this hypothesis.

Theories of Musical Emotions and Music Origins


Aristotle listed the power of music among the unsolved problems (Aristotle, IV BCE/1995, p. 1434). During the last two decades, powers of music that previously seemed mysterious have been receiving scientific foundations due to the research of scientists in several fields. Integration of this research in recent years provides evidence for the evolutionary origins and roles of music. This section provides a selection of views on the role of music in cognition from contemporary research. Current theories of musical emotions attempt to uncover this mystery by looking into its evolutionary origins. Justus and Hutsler (2003) and McDermott and Hauser (2003) review evidence for evolutionary origins of music. They emphasize that an unambiguous identification of genetic evolution as a source of music origins requires innateness, domain specificity for music, and uniqueness to humans (since no other animals make music in the sense humans do). The conclusions of both reviews are similar, i.e., “humans have an innate drive to make and enjoy music.” There is much suggestive evidence supporting a biological predisposition for music. Certain basic abilities for music are guided by innate constraints.
Nevertheless, it is unclear whether these constraints are uniquely human, since they “show parallels in other domains.” It is likely that many musical abilities are not adaptations for music, but are based on more general-purpose mechanisms. There are “some intriguing clues about innate perceptual biases related to music, but probably not enough to seriously constrain evolutionary hypothesis.” “Available evidence suggests that the innate constraints in music are not specific to that domain, making it unclear, which domain(s) provided the relevant selection pressures.” “There is no compelling reason to argue categorically that music is a cognitive domain that has been shaped by natural selection.” In Nature’s series of essays on music, McDermott (2008) writes: “Music is universal, a significant feature of every known culture, and yet does not serve an obvious, uncontroversial function.” Trainor (2008) argues that for higher cognitive functions, such as music, it is difficult to differentiate between adaptation and exaptation (structures originally evolved for other purposes and used today for music), since most such functions involve both “genes and experience.” Therefore the verdict on whether music is an evolutionary adaptation should be decided based on advantages for survival. Fitch (2004) comments that biological and cultural aspects of music are hopelessly entangled, and “the greatest value of an evolutionary perspective may be to provide a theoretical framework.”
In the search for evolutionary origins of music, Huron (1999) emphasizes, it is necessary to look for complex multistage adaptations, built on prior adaptations, which might have evolved for several reasons. He discusses social reasons for music origins and lists several possible evolutionary advantages of music: mate selection, social cohesion, coordination of group work, development of auditory skills, refined motor coordination, conflict reduction, and preservation of stories of tribal origins. However, according to Huron, the list of possible uses of music does not by itself explain music’s power over the human psyche; it does not explain why music, and not some other, nonmusical activity, has been used for these purposes. Cross (2008a,b) concentrates on evolutionary arguments specific to music. He integrates neuroscientific, cognitive, and ethnomusicological evidence and emphasizes that it is inadequate to consider music as “patterns of sounds” used by individuals for hedonic purposes. Music should be considered in the context of its uses in pre-cultural societies for social structuring, forming bonds, and group identities. According to Cross, the evolution of music was based on biological and genetic mechanisms already existing in the animal world. The power of language is in “its ability to present semantically decomposable propositions.” Language, because of its concreteness, on one hand enabled exchange of specific and complicated knowledge, but on the other hand could exacerbate oppositions between individual goals and transform an uncertain encounter into a conflict. Music is a communicative tool with opposite properties. It is directed at increasing a sense of ‘shared intentionality.’ Music’s major role is social; it serves as an ‘honest signal’ (that is, it “reveals qualities of a signaler to a receiver”) with nonspecific goals. This property of music, “the indeterminacy of meaning or floating intentionality,” allows for individual interactions while maintaining different “goals and meanings” that may conflict. Thus music “promotes the alignment of participants’ sense of goals.”
Cross suggests that music evolved together with language rather than as its precursor. Evolution of language required a re-wiring of neural control over the vocal tract, and this control had to become more voluntary for language. At the same time a less voluntary control, originating in ancient emotional brain regions, had to be maintained for music to continue playing the role of ‘honest signal.’ Related differences in neural controls over the vocal tract between primates and humans were reviewed in Perlovsky (2005, 2006b, 2006e, 2007a, 2010a).
In hominid lineages juvenile periods lengthened. Juvenile animals, especially social primates, engage in play, which prepares them for adult life. Play involves music-like features; thus proto-musical activity has ancient genetic roots (Cross & Morley, 2008). The lengthening of juvenile periods was identified as possibly fundamental for proto-musical activity and for the origin of music. Infant-directed speech (IDS) has special musical (or proto-musical) qualities that are universal around the globe. This research was reviewed in Trehub (2003), who demonstrated that IDS exhibits many similar features across different cultures. Young infants are sensitive to musical structures in the human voice. Several researchers relate this sensitivity to the “coregulation of affect by parent and child” (Dissanayake, 2000), and consider IDS to be an important evolutionary mechanism of music origin. Yet, arguments presented later suggest that IDS cannot be the full story of musical evolution.
Dissanayake (2008) considers music primarily as a behavioral and motivational capacity. Naturally evolving processes led to ritualization of music through formalization, repetition, exaggeration, and elaboration. Ritualization led to arousal and emotion shaping. She emphasizes that such proto-musical behavior has served as a basis for culture-specific inventions of ritual ceremonies that united groups as they united mother-infant pairs. She describes structural and functional resemblances among mother-infant interactions, ceremonial rituals, and adult courtship, and relates these to properties of music. All these, she proposes, suggest an evolved “amodal neural propensity in human species to respond—cognitively and emotionally—to dynamic temporal patterns produced by other humans in context of affiliation.” This is why, according to Dissanayake, proto-musical behavior produces such strong emotions and activates brain areas involved in ancient mechanisms of reward and motivation, the same areas that are involved in satisfaction of the most powerful instincts of hunger and sex.

According to Mithen (2007), Neanderthals may have had a proto-musical ability. He argues that music and language evolved by differentiation of early proto-human vocalizations, an undifferentiated proto-music-language he calls “Hmmmm.” This development was facilitated by vertical posture and walking, which required sophisticated sensorimotor control, a sense of rhythm, and possibly an ability for dancing. He dates the differentiation of Hmmmm to after 50,000 BP. Further evolution toward music occurred for religious purposes, which he identifies with supernatural beings. On this view music is no longer needed; it has been replaced by language and persists only as inertia, a remnant of the primordial Hmmmm that is difficult to get rid of. An exception could be religious practice, where music is needed since we do not know how to communicate with gods. I disagree with dismissing Bach, Beethoven, or Shostakovich in this way, as well as with the implied characterization of religion, and discuss these doubts later. Mithen relates music to emotions due to its presence in the original Hmmmm. Songs recombine language and music into the original Hmmmm; however, Mithen gives no fundamental reason or need for this recombination. While addressing language in detail, Mithen (like other scientists) gives no explanation of why humans learn language by about the age of five, whereas the corresponding mastery of cognition takes the rest of a lifetime; steps toward explaining this are taken in Perlovsky (2004; 2006d; 2009a,b; 2010a) and summarized later.
Mithen’s view on religion contradicts the documented evidence for the relatively late proliferation of supernatural beings in religious practice (Jaynes, 1976), and the mathematical and cognitive explanations of the role of the religiously sublime in the workings of the mind (Perlovsky, 2006a,d; Levine & Perlovsky, 2008a).
Juslin and Västfjäll (2008) analyze mechanisms of musical emotions. They discuss a number of neural mechanisms involved in emotions and the different meanings implied for the word ‘emotion.’ I would mention here just two of these. First, consider the most often discussed basic emotions, for which we have specific words: fear, sexual love, jealousy, thirst… Mechanisms of these emotions are related to satisfaction or dissatisfaction of basic instinctual bodily needs such as survival, procreation, and the need for water balance in the body. The ability of music to express basic emotions unambiguously is a separate field of study. Second, consider the complex or ‘musical’ emotions (sometimes called ‘continuous’), which we ‘hear’ in music and for which we do not necessarily have special words. The mechanisms and role of these emotions in the mind and in cultural evolution are the subject of this paper.
Six different types of music, fulfilling six fundamental needs and eliciting six basic emotions, are considered by Levitin (2008). He suggests that music originated from animal cries and functions today essentially in the same way, communicating emotions. It is more difficult, he writes, “to fake sincerity in music than in spoken language.” The reason that music evolved as an ‘honest signal,’ he suggests, is that it “simply” co-evolved with brains “precisely to preserve this property.” Given that even animals as simple as birds can fake their cries (Lorenz, 1981), I have my doubts about this “simply;” further doubts arise as soon as we think about actors, singers, and poets, not only contemporary professionals, but also those existing in traditional societies (Meyer, Palmer, & Mazo, 1998) since time immemorial. This suggestion, it seems, has not been informed by the view of Jung (1921) that some people manipulate their emotions better than their thoughts, or by current psychological studies of emotional intelligence (Mayer, Salovey, & Caruso, 2008).
Mathematical modeling of the mechanisms of music perception and musical emotions was considered in (Purwins, Herrera, Grachten, Hazan, Marxer, & Serra, 2008a,b; Coutinho & Cangelosi, 2009). These modeling approaches can be used to obtain and verify predictions of various theories.
In the following sections we discuss mechanisms of music evolution from the differentiation of the original proto-music-language to its contemporary refined states. In previously proposed theories, discussions of the mechanisms that evolved music from IDS to Bach and the Beatles are lacking or unconvincing. Why do we need the virtual infinity of “musical emotions” that we hear in music (e.g., in classical Western music)? Dissanayake (2008) suggests that this path went through ceremonial ritualization, due to “a basic motivation to achieve some level of control over events…” However, she does not explain why this motivation is “basic” and where it came from; she does not discuss why humans have this motivation in a principally different way than animals. She continues: if “for five or even ten centuries… music has been emancipated from its two-million year history and its adaptive roots says more about the recency and aberrance of modernity…” Essentially, Dissanayake does not consider what modern people call music. Cross & Morley (2008) appropriately disagree: “…it would be impossible to remove music without removing many of the abilities of social cognition that are fundamental to being human.” They conclude that “there are further facets to the evolutionary story.” This paper discusses a novel hypothesis that clarifies some of these remaining “further facets” and suggests a fundamental role of musical emotions in cognition and evolution.

Fundamental Mechanisms of the Mind


Here we summarize fundamental mechanisms of the mind: concepts, instincts, emotions, and behavior; these serve as a first step toward more complicated mechanisms essential for understanding the role and evolution of music. The content of this section summarizes neuro-cognitive and mathematical arguments considered, in detail, in (Perlovsky 1987; 1994a; 1997; 1998; 2001a, 2006a,d; 2007b,d; 2009c, 2010c; 2010f; Perlovsky & McManus 1991; Perlovsky & Kozma 2007a,b; Perlovsky & Mayorga 2008) and in references therein.
The mind understands the world in terms of concepts. Concepts operate as mental representations-models of objects and situations. This analogy is quite literal: e.g., during visual perception of an object, a concept-model in the mind (memory, representation) projects an image onto the visual cortex, which is matched there to an image projected from the retina. This simplified description is discussed in more detail in the above references. A similar neuro-psychological theory of perceptual symbol systems describes this process of matching bottom-up and top-down signals as simulator-processes (Barsalou, 1999). Experimental neuroimaging proof of this mechanism, with a detailed description of the brain regions involved, is given in (Bar, Kassam, Ghuman, Boshyan, Schmid, Dale, et al., 2006). Perception occurs when the two images are successfully matched.
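To make this matching mechanism concrete, the following is a minimal illustrative sketch (in Python), not the model from the above references: a hypothetical one-dimensional concept-model with a single position parameter is projected top-down and compared against a bottom-up sensory signal, and perception corresponds to finding the parameter value that maximizes the similarity between the two.

    # Minimal sketch of perception as matching a top-down concept-model
    # projection against a bottom-up sensory signal. All names and numbers
    # here are hypothetical, chosen only for illustration.
    import numpy as np

    def project_model(position, size=100, width=5.0):
        """Top-down projection: the 'image' the model expects at a given position."""
        x = np.arange(size)
        return np.exp(-((x - position) ** 2) / (2.0 * width ** 2))

    def similarity(projection, sensory_input):
        """A simple similarity between top-down and bottom-up signals."""
        return float(np.dot(projection, sensory_input))

    # Bottom-up signal: an object actually located at position 62, plus noise.
    rng = np.random.default_rng(0)
    sensory_input = project_model(62) + 0.05 * rng.standard_normal(100)

    # Perception: search over the model parameter for the best match.
    candidates = np.arange(100)
    scores = [similarity(project_model(p), sensory_input) for p in candidates]
    print("perceived position:", int(candidates[int(np.argmax(scores))]))  # close to 62

In the referenced Neural Modeling Fields treatment, many concept-models compete simultaneously and their parameters are adapted from vague to crisp; this toy example keeps only the single matching step.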
The “mechanism of concepts” evolved for instinct satisfaction. Instincts are mechanisms of survival that are much more ancient than the mechanisms of concepts. The psychological literature actively discusses mechanisms of instincts, and these discussions can be followed in the given references. Here we follow these references in considering the mechanism of instincts as similar to internal sensors that measure vital organism parameters important for normal functioning and survival. For example, a low sugar level in blood indicates an instinctual need for food. We consider this sensor measurement, together with the requirement to maintain it within certain limits, to be an “instinct.”
The word “emotions” designates a number of different mechanisms, which are surveyed, for example, in (Cabanac, 2002; Juslin & Västfjäll, 2008). Here we consider emotions as neural signals connecting instinctual and conceptual brain regions. Emotions (or emotional neural signals) communicate instinctual needs to the conceptual recognition-understanding mechanisms of the brain, so that concept-models corresponding to objects or situations that can potentially satisfy instinctual needs receive preferential attention and processing resources in the brain (Grossberg & Levine, 1987; Perlovsky, 2000; 2006d). Thus emotional signals evaluate concepts for the purpose of instinct satisfaction. This evaluation is not according to rules or concepts (as in rule systems of artificial intelligence), but according to a different instinctual-emotional mechanism described in the given references. Conceptual-emotional understanding of the world results in actions in the outside world or within the mind. Here we only touch on the behavior of improving understanding and knowledge, behavior inside the mind directed at improving concepts.
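The instinct-as-sensor and emotion-as-evaluative-signal description above can be sketched in a few lines of illustrative Python (the parameter names, limits, and relevance values are hypothetical, chosen only to mirror the blood-sugar example):

    # Sketch: an instinct as an internal sensor with an acceptable range, and an
    # emotional signal that evaluates concepts by their relevance to the unmet need.
    from dataclasses import dataclass

    @dataclass
    class Instinct:
        name: str
        value: float   # internal "sensor" reading, e.g., blood sugar level
        low: float     # lower bound of the acceptable range
        high: float    # upper bound of the acceptable range

        def need(self) -> float:
            """How far the vital parameter falls below its acceptable range."""
            return max(0.0, self.low - self.value)

    def emotional_signal(instinct: Instinct, concept_relevance: float) -> float:
        """Emotion as a signal connecting an instinctual need to a concept-model:
        concepts relevant to an unsatisfied instinct receive a stronger evaluation."""
        return instinct.need() * concept_relevance

    hunger = Instinct("food", value=0.3, low=0.5, high=1.0)
    print(emotional_signal(hunger, concept_relevance=0.9))  # food-related concept: strong signal
    print(emotional_signal(hunger, concept_relevance=0.1))  # unrelated concept: weak signal

In this picture, concepts receiving stronger emotional signals are the ones that get the preferential attention and processing resources described above.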
Language and cognitive mechanisms are closely interconnected, yet different. Language and cognition mechanisms are located in different parts of the brain. Language is learned early in life from the surrounding language, where it exists “ready-made.” But cognition requires experience. High cognition cannot be learned from experience alone, language “guidance” is necessary (Cangelosi et al 2007; Fontanari & Perlovsky 2007, 2008a,b; Fontanari et al 2009; Perlovsky 2009a; Perlovsky & Ilin 2009, 2010; Tikhanoff et al 2006). Interaction between cognition and language requires motivation; this motivation is provided by emotionality of language, which resides in language sounds, prosody (Perlovsky 2005, 2006b, 2007a,c, 2009b, 2010d; Guttfreund 1990; Balasko’ & Cabanac 1998; Buchanan et al 2000).
The summarized theory describing conceptual-emotional recognition and understanding encompasses the mechanisms of intuition, imagination, planning, the conscious and the unconscious, and many others, including aesthetic emotions. Here we have only touched on the mechanisms that will be needed later for understanding musical emotions.

Differentiated Knowledge Instinct and Musical Emotions


Here we discuss the main hypothesis of this paper: the fundamental role of musical emotions in evolution of consciousness, cognition, and culture.
The balance between differentiation and synthesis is crucial for the development of cultures and for the emergence of contemporary consciousness. Those of our ancestors who could develop differentiated consciousness could better understand the surrounding world and better plan their lives; they had an evolutionary advantage, provided that in addition to differentiation they were able to maintain the unity of self required for concentrating will. Maintaining the balance between differentiation and synthesis gave our ancestors an evolutionary advantage. Here we examine the mechanisms by which music helps maintain this balance. The main hypothesis of this paper is that maintaining this balance is the fundamental role that music plays and the reason for the evolution of this otherwise unexplainable ability.
History contains a long record of advanced civilizations whose synthesis and ability to concentrate their will were undermined by differentiation. They were destroyed by less developed civilizations (barbarians) whose differentiation lagged behind, but whose synthesis and will were strong enough to overcome the great powers of their times. These examples include the Akkadians overrunning the Sumerians some three millennia BCE, barbarians overcoming the Romans, and countless civilizations before and after these events. But I would like to concentrate on less prominent and more important events of everyday individual human survival, from our ancestors to our contemporaries. If differentiation undermined synthesis, the purpose and the will to survive, then differentiated consciousness and culture would never have emerged.
Differentiation is the very essence of cultural evolution, but it threatens synthesis and may destroy the entire purpose of culture, and the culture itself (Perlovsky, 2005; 2006b,e; 2009b). This instability is entirely human; it does not exist in the animal kingdom, because the pace of evolution and differentiation of knowledge from amoeba to primates was very slow, and instinctual mechanisms of synthesis apparently evolved along with brain capacity. The origin of language changed this: the accumulation of differentiated knowledge vastly exceeded the biological evolutionary capacity to maintain synthesis. This paper suggests that along with the origin of language another uniquely human ability evolved for maintaining synthesis, the ability for music. The hypothesis of this paper is that music evolved along with language for maintaining the balance between differentiation and synthesis. First we present arguments, then we discuss empirical and experimental means by which this hypothesis can be verified.
Many scientists studying the evolution of language have come to the conclusion that originally language and music were one (Darwin, 1871; Cross, 2008a; Masataka, 2008). In this original state the fused language-music did not threaten synthesis. Not unlike animal vocalizations, the sounds of the voice directly affected ancient emotional centers and connected the semantic contents of vocalizations to instinctual needs and to behavior. In this way Jaynes (1976) explained the stability of the great kingdoms of Mesopotamia up to 4,000 years ago. This synthesis was a direct inheritance from animal voicing mechanisms, and to this very day the voice affects us emotionally directly through ancient emotional brain centers (Panksepp & Bernatzky, 2002; Trainor, 2008).
We would like to emphasize the already discussed fact that since its origin language has evolved in the direction of enhancing conceptual differentiation by separating it from ancient emotional and bodily instinctual influences. While language was evolving in this more conceptual and less emotional direction, ‘another part’ of human vocalization evolved in a less semantic and more emotional direction by enhancing the already existing mechanisms of the voice-emotion-instinct connection. As language enhanced differentiation and destroyed the primordial unity of the psyche, music was reconnecting the differentiated psyche, restoring the meaning and purpose of knowledge and making cultural evolution possible.
The fundamental role of music in cultural evolution was to maintain synthesis in the face of increasing differentiation due to language. This was the origin and evolutionary direction of music. We now return to the basic mechanisms of the mind, including the knowledge instinct (KI), and analyze them in more detail in view of this hypothesis.
In previous sections the KI was described as an internal “sensor” of the mind measuring the similarity between concept-models and the world, together with mechanisms for maximizing this similarity. But clearly this is a great simplification. It is not sufficient for the human mind to maximize an average value of the similarity between all concept-models and all experiences. Adequate functioning requires constant resolution of contradictions between multiple mutually contradicting concepts, and between individual concepts quickly created in culture and slowly evolving primordial animal instincts. The human psyche is not as harmonious as the psyche of animals. As Nietzsche (1876/1997) put it, “human is a dissonance,” a contradictory being. Some of our ancestors were able to acquire differentiated, contradictory knowledge and still maintain the wholeness of psyche necessary for concentration of will and purposeful actions; they had a tremendous advantage for survival.
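For readers who want the mathematical shape of this simplified picture before it is differentiated, the Neural Modeling Fields references cited above express the similarity between concept-models and the world roughly as follows (the notation here is simplified and illustrative; see Perlovsky, 2006d for the exact formulation):

    % Total similarity between bottom-up signals X(n) and concept-models M_m
    % with parameters S_m; r(m) are prior weights and l is a conditional
    % partial similarity:
    L(\{X\},\{M\}) = \prod_{n \in N} \sum_{m \in M} r(m)\, l\bigl(X(n) \mid M_m(S_m)\bigr)
    % The knowledge instinct corresponds to maximizing L over the parameters S_m.

The argument of this section is that maximizing such a single overall measure is not sufficient; the differentiated KI adds, in effect, many concept-specific evaluations on top of it.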
For these reasons the KI itself became differentiated. It was directed not only at maximizing the overall harmony, but also at reconciling constantly evolving contradictions, cognitive dissonances. This hypothesis requires theoretical elaboration and experimental confirmation. Emotions related to knowledge are aesthetic emotions, subjectively felt as harmony or disharmony. These emotions were differentiated along with the KI. Consider high-value concepts such as one’s family, religion, or political preferences. These concepts ‘color’ many other concepts with emotional values, and each of these cognitive dissonances requires a different emotion for reconciliation, a different dimension of an emotional space. In other words, a high-value concept attaches aesthetic emotions to other concepts. In this way each concept acts as a separate part of the KI: it evaluates other concepts for their mutual consistency; this is the mechanism of the differentiated knowledge instinct. Virtually every combination of concepts contains some degree of contradiction. The number of combinations is practically infinite (Perlovsky, 2006d). Therefore the aesthetic emotions that reconcile these contradictions are not just several feelings to which we can assign specific words. There is a virtually uncountable infinity, “almost a continuum,” of aesthetic emotions, and most likely the dimensionality of this continuum is huge. We feel this continuum of emotions (not just many separate emotions) when listening to music. We feel this continuum in Palestrina, Bach, Beethoven, Mozart, Tchaikovsky, Shostakovich, the Beatles, and Eminem… (and this mechanism extends to all cultures in the world).
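A short worked example of the combinatorial claim above (the number of concepts is hypothetical, chosen only for illustration):

    % With N concepts, the number of possible concept combinations (subsets) is 2^N.
    % Even a modest N = 100 gives
    2^{100} \approx 1.3 \times 10^{30},
    % far more potential mutual contradictions than could ever be labeled by a
    % short list of named basic emotions.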
Spinoza (2005/1677) was the first philosopher to discuss the multiplicity of emotions related to knowledge. Each emotion, he wrote, is different depending on the object to which it is applied. There is a principled difference between the multiplicity of aesthetic emotions and the ‘lower’ emotions corresponding to bodily instincts. Those emotions, as discussed, are referred to as ‘basic’ emotions in the psychological literature (e.g., see Juslin & Sloboda, 2001; Sloboda & Juslin, 2001; Juslin & Västfjäll, 2008). As discussed, psychologists identify them; they all have special words, such as ‘rage’ or ‘sadness.’ Levitin (2008) suggests that there are just six basic types of songs, corresponding to basic emotions related to basic instinctual needs. But Huron (1999) has already argued that this use of music for basic needs is just that, a utilitarian use of music, which evolved for a much more important purpose that cognitive musicologists have not yet been able to identify. Sloboda & Juslin (2001) emphasized that musical emotions are different from other emotions. Emotions related to “mismatch” and “discrepancies” were discussed in (Frijda, 1986; Juslin & Sloboda, 2001). It is proposed here that musical emotions have evolved for the synthesis of differentiated consciousness: for reconciling the contradictions that every step toward differentiation entails, for reconciling cognitive dissonances, and for creating a unity of the differentiated Self.

References


1. Akerlof, G.A. & Dickens, W.T. (2005). The economic consequences of cognitive dissonance. In Akerlof, G.A. Explorations in Pragmatic Economics. New York: Oxford University Press
2. Aristotle. (1995). The complete works. (The revised Oxford translation, ed. J. Barnes). Princeton, NJ: Princeton Univ. Press. (Original work IV BCE)
3. Balaskó, M. & Cabanac, M. (1998). Grammatical choice and affective experience in a second-language test. Neuropsychobiology, 37, 205-210.
4. Ball, P. (2008). Facing the music. Nature, 453, 160-162.
5. Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., et al. (2006). Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences USA, 103, 449-454.
6. Buchanan, T. W., Lutz, K., Mirzazade, S. Specht, K., Shah, N.J., Zilles, K. et al. (2000). Recognition of emotional prosody and verbal components of spoken language: an fMRI study. Cognitive Brain Research 9, 227-238.
7. Cabanac, M. (2002).What is emotion? Behavioural Processes, 60, 69-84.
8. Cabanac, M., Fontanari, F., Bonniot-Cabanac, M.-C., & Perlovsky, L.I. (2011). Emotions of Cognitive Dissonance, IEEE proceedings IJCNN 2011, to be published.
9. Cangelosi A., Bugmann G., & Borisyuk R. (Eds.) (2005).  Modeling Language, Cognition and Action: Proceedings of the 9th Neural Computation and Psychology Workshop. Singapore: World Scientific.
10. Coventry K.R, Lynott L., Cangelosi A., Knight L., Joyce D., Richardson D.C. (2009). Spatial language, visual attention, and perceptual simulation. Brain and Language, in press
11. Cangelosi, A,, Greco, A., & Harnad S. (2000). From robotic toil to symbolic theft: grounding transfer from entry-level to higher-level categories. Connect. Sci. 12:143–62.
12. Cangelosi A. & Parisi D.  (Eds.) (2002). Simulating the Evolution of Language. London: Springer.
13. Cangelosi, A. & Riga T. (2006). An embodied model for sensorimotor grounding and grounding transfer: experiments with epigenetic robots. Cogn. Sci. 30:673–89.
14. Cangelosi A, Tikhanoff V., Fontanari J.F., Hourdakis E. (2007). Integrating language and cognition: A cognitive robotics approach. IEEE Computational Intelligence Magazine, 2(3), 65-70
15. Confucius. (551–479 B.C.E./2000). Analects. Tr. D.C. Lau. The Chinese University Press: Hong Kong, China
16. Coutinho, E. & Cangelosi, A. (2009). The use of spatio-temporal connectionist models in psychological studies of musical emotions. Music Perception, 27(1), 1-15.
17. Cross, I. (2008a). The evolutionary nature of musical meaning. Musicae Scientiae, 179-200.
18. Cross, I. (2008b). Musicality and the human capacity for culture. Musicae Scientiae, Special issue, 147-167.
19. Cross, I., & Morley, I. (2008). The evolution of music: theories, definitions and the nature of the evidence. In S. Malloch, & C. Trevarthen (Eds.), Communicative musicality (pp. 61-82). Oxford: Oxford University Press.
20. Darwin, C.R. (1871). The descent of man, and selection in relation to sex. London, GB: John Murray.
21. Davis, P. J., Zhang, S. P., Winkworth A., & Bandler, R. (1996). Neural control of vocalization: respiratory and emotional influences. J Voice,10, 23–38.
22. Deacon, T. (1989). The neural circuitry underlying primate calls and human language. Human Evolution Journal, 4(5), 367-401.
23. Diamond, J. (1997). Guns, germs, and steel: The fates of human societies. New York, NY: W.W. Norton, & Co.
24. Dissanayake, E. (2000). Antecedents of the temporal arts in early mother-infant interactions. In N. Wallin, B. Merker, & S. Brown (Eds.), The origins of music (pp. 389-407). Cambridge, MA: MIT Press.
25. Dissanayake, E. (2008). If music is the food of love, what about survival and reproductive success? Musicae Scientiae Special Issue, 169-195.
26. Editorial. (2008). Bountiful noise. Nature, 453, 134.
27. Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
28. Fitch, W. T. (2004). On the biology and evolution of music. Music Perception, 24, 85-88.
29. Fontanari, J.F. & Perlovsky, L.I. (2007). Evolving Compositionality in Evolutionary Language Games. IEEE Transactions on Evolutionary Computations., 11(6), 758-769; on-line doi:10.1109/TEVC.2007.892763
30. Fontanari, J.F. & Perlovsky, L.I. (2008a). How language can help discrimination in the Neural Modeling Fields framework. Neural Networks, 21(2-3), 250–256.
31. Fontanari, J.F. & Perlovsky, L.I. (2008b). A game theoretical approach to the evolution of structured communication codes, Theory in Biosciences, 127, 205-214.
32. Fontanari, J. F., Tikhanoff, V., Cangelosi, A., Ilin, R., & Perlovsky, L.I. (2009). Cross-situational learning of object–word mapping using Neural Modeling Fields. Neural Networks, 22(5-6), 579-585. http://dx.doi.org/10.1007/s12064-008-0024-1.
33. Frijda, N. H. (1986). The emotions. Cambridge: Cambridge University Press.
34. Grossberg, S., & Levine, D. (1987). Neural dynamics of attentionally modulated Pavlovian conditioning: Blocking, interstimulus interval, and secondary reinforcement.  Applied Optics 26, 5015-5030.
35. Guttfreund D. G. (1990). Effects of language usage on the emotional experience of Spanish-English and English-Spanish bilinguals. J Consult Clin Psychol, 58, 604-607.
36. Huron, D. (1999). Ernest Bloch Lectures. Berkeley, CA: University of California Press.
37. Ilin, R. &  Perlovsky, L. (2009). Cognitively Inspired Neural Network for Recognition of Situations. International Journal of Natural Computing Research, in print.
38. Jaynes, J. (1976). The origin of consciousness in the breakdown of the bicameral mind. Boston: Houghton Mifflin Co.
39. Jung. C.G. (1921). Psychological Types. In the Collected Works, v.6, Bollingen Series X. Princeton University Press: Princeton, NJ.
40. Juslin, P. N. & Sloboda, J. A. (2001). Music and emotion: Theory and research. Oxford, GB: Oxford University Press.
41. Juslin, P.N., & Västfjäll, D. (2008) Emotional responses to music: The Need to consider underlying mechanisms. Behavioral and Brain Sciences, in print.
42. Justus, T., & Hutsler, J. J. (2003). Fundamental issues in the evolutionary psychology of music: Assessing innateness and domain specificity. Music Perception, 23, 1-27.
43. Kant, I. (1790). The critique of judgment. J.H. Bernard (translator). Amherst, NY: Prometheus Books.
44. Lao-Tzu. (6th B.C.E./1979). Tao Te Ching. Tr. D. C. Lau. Penguin Books: New York, NY.
45. Larson, C.R. (1991). Activity of PAG neurons during conditioned vocalization in the macaque monkey. In A. Depaulis, & R. Bandler (Eds). The midbrain periaqueductal gray matter (pp. 23–40). New York, NY: Plenum Press.
46. Levine, D. S., & Perlovsky, L. I. (2008a). Neuroscientific insights on Biblical myths. Simplifying heuristics versus careful thinking: Scientific analysis of millennial spiritual issues. Zygon, Journal of Science and Religion, 43(4), 797-821.
47. Levine, D.S. and Perlovsky, L.I. (2008b). A Network Model of Rational versus Irrational Choices on a Probability Maximization Task. World Congress on Computational Intelligence (WCCI). Hong Kong, China.
48. Levitin, D. J. (2006). This is your brain on music: The science of a human obsession. London: Dutton.
49. Levitin, D. J. (2008). The world in six songs. London: Dutton.
50. Lorenz, K. (1981). The foundations of ethology. New York: Springer Verlag.
51. Luther M. (1538). Preface to Symphoniae jucundae. See W&T, p. 102.
52. Masataka, N. (2008). The origins of language and the evolution of music: A comparative perspective. Physics of Life Reviews, 6, 11–22.
53. Mayorga, R. & Perlovsky, L.I., Eds. (2008). Sapient Systems. Springer, London, UK.
54. Mayer, J. D., Salovey, P., & Caruso, D. R. (2008). Emotional Intelligence. New Ability or Eclectic Traits? American Psychologist, 63 (6), 503–517.
55. McDermott, J., & Hauser, M. (2003). The origins of music: Innateness, uniqueness, and evolution. Music Perception, 23, 29-59.
56. McDermott, J. (2008). The evolution of music. Nature, 453, 287-288.
57. Meyer, R. K., Palmer, C., & Mazo, M. (1998). Affective and coherence responses to Russian laments. Music Perception, 16(1), 135-150.
58. Mithen, S. (2007). The singing Neanderthals: The origins of music, language, mind, and body. Cambridge MA: Harvard University Press.
59. Nietzsche, F. (1876/1997). Untimely Meditations. Tr. R. J. Hollingdale. Cambridge, England: Cambridge University Press.
60. Panksepp, J., & Bernatzky, G. (2002). Emotional sounds and the brain: The neuro-affective foundations of musical appreciation. Behavioural Processes, 60, 133-55.
61. Patel, A. D. (2008). Music, language, and the brain. New York, NY: Oxford Univ. Press.
62. Perlovsky, L.I. (1987). Multiple sensor fusion and neural networks. DARPA Neural Network Study, 1987.
63. Perlovsky, L.I. (1994a). Computational Concepts in Classification: Neural Networks, Statistical Pattern Recognition, and Model Based Vision. Journal of Mathematical Imaging and Vision, 4 (1), pp. 81-110.
64. Perlovsky, L.I. (1997). Physical Concepts of Intellect. Proceedings of Russian Academy of Sciences, 354(3), pp. 320-323
65. Perlovsky, L.I. (1998). Conundrum of Combinatorial Complexity. IEEE Trans. PAMI, 20(6) pp. 666-670.
66. Perlovsky, L.I. (2000). Beauty and mathematical Intellect. Zvezda, 2000(9), 190-201 (Russian).
67. Perlovsky, L. I. (2001a). Neural networks and intellect. New York, NY: Oxford Univ. Press.
68. Perlovsky, L. I. (2001b). Mystery of sublime and mathematics of intelligence. Zvezda, 2001(8), 174-190, St. Petersburg (Russian).
69. Perlovsky, L.I. (2002a). Physical Theory of Information Processing in the Mind: concepts and emotions. SEED On Line Journal, 2002 2(2), pp. 36-54.
70. Perlovsky, L.I. (2002b) Aesthetics and Mathematical Theory of Intellect. Russian Academy of Sciences, Moscow: Iskusstvoznanie, Journal of History and Theory of Art, 2, 558-594 (Russian).
71. Perlovsky, L.I. (2004). Integrating Language and Cognition. IEEE Connections, Feature Article, 2(2), pp. 8-12.
72. Perlovsky, L. I. (2005). Evolution of consciousness and music. Zvezda, 2005(8), 192-223, St. Petersburg (Russian).
73. Perlovsky, L.I. (2006a). Fuzzy Dynamic Logic. New Math. and Natural Computation, 2(1), 43-55.
74. Perlovsky, L.I. (2006b). Music – the first principles. Musical Theater.
75. Perlovsky, L.I. (2006d). Toward physics of the mind: Concepts, emotions, consciousness, and symbols. Physics of Life Reviews, 3(1), 22-55.
76. Perlovsky, L.I. (2006e). Joint evolution of cognition, consciousness, and music. Lectures in Musicology, School of Music, Columbus, OH: University of Ohio.
77. Perlovsky, L.I. (2007a). Evolution of languages, consciousness, and cultures. IEEE Computational Intelligence Magazine, 2(3), 25-39.
78. Perlovsky, L.I. (2007b). Modeling field theory of higher cognitive functions. In A. Loula, R. Gudwin, J. Queiroz (Eds.) Artificial cognition systems. Hershey, PA: Idea Group (pp. 64-105).
79. Perlovsky, L.I. (2007c). Symbols: Integrated cognition and language. In R. Gudwin, J. Queiroz (Eds.). Semiotics and intelligent systems development. Hershey, PA: Idea Group (pp.121-151).
80. Perlovsky, L.I. (2007d). Neural Dynamic Logic of Consciousness: the Knowledge Instinct. In Neurodynamics of Higher-Level Cognition and Consciousness, Eds. Perlovsky, L.I., Kozma, R. Springer Verlag, Heidelberg, Germany.
81. Perlovsky, L.I. (2008a). Music and consciousness. Leonardo, Journal of Arts, Sciences and Technology, 41(4), 420-421.
82. Perlovsky, L.I. (2008b). Sapience, Consciousness, and the Knowledge Instinct. (Prolegomena to a Physical Theory). In Sapient Systems, Eds. Mayorga, R., Perlovsky, L.I., Springer, London.
83. Perlovsky, L.I. (2009a). Language and Cognition. Neural Networks, 22(3), 247-257. doi:10.1016/j.neunet.2009.03.007.
84. Perlovsky, L.I. (2009b). Language and Emotions: Emotional Sapir-Whorf Hypothesis. Neural Networks, 22(5-6); 518-526.  
85. Perlovsky, L.I. (2009c). ‘Vague-to-Crisp’ Neural Mechanism of Perception. IEEE Trans. Neural Networks, 20(8), 1363-1367.
86. Perlovsky, L.I. (2010a). Musical emotions: Functions, origins, evolution. Physics of Life Reviews, 7(1), 2-27.
87. Perlovsky, L.I. (2010b). Intersections of Mathematical, Cognitive, and Aesthetic Theories of Mind, Psychology of Aesthetics, Creativity, and the Arts, in print.
88. Perlovsky, L.I. (2010c). Neural Mechanisms of the Mind, Aristotle, Zadeh, & fMRI. IEEE Trans. Neural Networks, in print.
89. Perlovsky, L.I. (2010d). Joint Acquisition of Language and Cognition; WebmedCentral BRAIN;1(10):WMC00994; http://www.webmedcentral.com/article_view/994
90. Perlovsky, L.I. (2010e). Jihadism and Grammars. Comment to “Lost in Translation,” Wall Street Journal, June 27, http://online.wsj.com/community/leonid-perlovsky/activity.
91. Perlovsky, L.I. (2010f). The Mind is not a Kludge, Skeptic, 15(3), 50-55.
92. Perlovsky, L.I, Bonniot-Cabanac, M.-C., & Cabanac, M. (2010). Curiosity and pleasure, WebmedCentral PSYCHOLOGY, 1(12):WMC001275.
93. Perlovsky, L.I. & Goldwag, A. (2010). The Grammatical Roots of Jihadism: How Cognitive Science Can Help Us Understand the War on Terror, arXiv.
94. Perlovsky, L.I. & Ilin, R. (2010). Neurally and Mathematically Motivated Architecture for Language and Thought. Special Issue "Brain and Language Architectures: Where We are Now?" The Open Neuroimaging Journal, 4, 70-80. http://www.bentham.org/open/tonij/openaccess2.htm
95. Perlovsky, L. & Ilin, R.  (2010). Computational Foundations for Perceptual Symbol System, submitted for publication.
96. Perlovsky, L.I., Kozma, R., Eds. (2007a). Neurodynamics of Cognition and Consciousness. Springer-Verlag: Heidelberg, Germany.
97. Perlovsky, L., Kozma, R. (2007b). Editorial - Neurodynamics of Cognition and Consciousness, In Neurodynamics of Cognition and Consciousness, Perlovsky, L., Kozma, R., Springer Verlag, Heidelberg, Germany.
98. Perlovsky, L.I. & Mayorga, R. (2008). Preface. In Sapient Systems, Eds. Mayorga, R., Perlovsky, L.I., Springer, London.
99. Perlovsky, L.I. & McManus, M.M. (1991). Maximum Likelihood Neural Networks for Sensor Fusion and Adaptive Classification. Neural Networks 4(1), pp. 89-102
100. Pinker, S. (1997). How the mind works. New York, NY: Norton.
101. Purwins, H., Herrera, P., Grachten, M., Hazan, A., Marxer, R., & Serra, X. (2008a) Computational models of music perception and cognition I: The perceptual and cognitive processing chain. Physics of Life Reviews, 5, 151–168.
102. Purwins, H., Herrera, P., Grachten, M., Hazan, A., Marxer, R., & Serra, X. (2008b) Computational models of music perception and cognition II: Domain-specific music processing. Physics of Life Reviews, 5, 169–182.
103. Schulz, G. M., Varga, M., Jeffires, K., Ludlow, C. L., & Braun, A. R. (2005). Functional neuroanatomy of human vocalization: an H215O PET study. Cerebral Cortex, 15(12), 1835-1847.
104. Seyfarth, R. M., & Cheney, D.L. (2003). Meaning and emotion in animal vocalizations. Ann NY Academy Sci, Dec., 32-55.
105. Sloboda, J. A. & Juslin, P. N. (2001). Psychological perspectives on music and emotion. In P. N. Juslin & J. A. Sloboda, Music and emotion: Theory and research, pp. 71-104. Oxford, GB: Oxford University Press.
106. Spinoza, B. (2005). Ethics. (E .Curley, Translator). New York, NY: Penguin. (Originally published in 1677).
107. Steinbeis, N., Koelsch, S., & Sloboda, J. A.  (2006)  The Role of Harmonic Expectancy Violations in Musical Emotions: Evidence from Subjective, Physiological, and Neural Responses.  Journal of Cognitive Neuroscience. 18,1380-1393.
108. Tikhanoff, V., Fontanari, J. F., Cangelosi, A. & Perlovsky, L. I. (2006). Language and cognition integration through modeling field theory: category formation for symbol grounding. In Book Series in Computer Science, v. 4131, Heidelberg: Springer.
109. Tversky, A. & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science 185, 1124-1131.
110. Trainor, L. (2004). Innateness, learning, and the difficulty of determining whether music is an evolutionary adaptation. Music Perception, 24, 105-110.
111. Trainor, L. (2008). The neural roots of music. Nature, 453(29), 598-599.
112. Trehub, S. E. (2003). The developmental origins of musicality. Nature Neuroscience, 6(7), 669-673.
113. Weiss, P., & Taruskin, R. (1984). Music in the Western World. New York, NY: Schirmer, Macmillan, p. 15. In sections 2 and 10, citations without references relate to this work; we also use the shortened reference W&T.

Source(s) of Funding


none

Competing Interests


none

