Tuesday, September 30, 2014

Alden McCollum: The Alinear Construction of Language

If someone were to say to me “[ð],” I would likely walk away from that interaction confused, and no information would have been conveyed. (This isn’t to dismiss the value of individual phonemes devoid of their contexts, but merely to point out that the type of information conveyed by an isolated phoneme (or allophone) is not the type of meaning that is exchanged in conversation. In general, throughout this post, when I refer to a phoneme as holding no meaning, this is the distinction I’m referring to.)
. . .
The phoneme [ð] on its own does not convey much meaning. Add a schwa and you have [ðə] (the), which is slightly more meaningful. Continue adding phonemes and eventually you might end up with something like [ðə gəɹl ɹʌnz ənəˈbæʃɛdˌli], which conveys an entire idea. Break that sentence down into its component parts, and it begins to lose its meaning. Discrete morphemes such as [li] and [ən] still hold some form of meaning. But continue breaking it down until you have, rather than morphemes, only individual phonemes, and the meaning all but disappears. Neither [ɹ] on its own nor the random fragmented sequence [ɹ], [w], [ʌ] conveys to the listener that there is a girl, or that she is running, or that she is doing so unabashedly. . .
                We have all these theoretically discrete units of language, of varying sizes and qualities. And it’s often tempting to look at them as if there were an entirely linear, additive progression from one to the next – phonemes combine to make morphemes, which in turn combine to make words, which combine to make clauses, then sentences, then paragraphs, and so on. And while in many senses this is true, it in no way tells the whole story; personally I think that viewing language acquisition within this linear construct is limiting and flawed in several ways, two of which I’ll discuss in this post.
                Firstly, language epitomizes the idea that “the whole is greater than the sum of its parts.” Take the phonemes [ʒ] and [ə], for example. On their own, neither holds any interpretable meaning; and yet when you combine the two you end up with the French “je” [ʒə] (meaning “I”), which conveys a tremendous amount of meaning. Those two phonemes, when combined in that particular order, imply the entire concept of a self as distinct from the Other. And it’s not that [ʒ] contains half the concept of self and [ə] the other half; it’s that neither of those phonemes holds any meaning until they are placed together, and, upon combination, that meaning is suddenly created. When discussing language, it’s not as simple as 1 + 2 equals 3. Rather, 1 + 2 equals 3 plus some other, less tangible element which arises as smaller units are combined into larger ones.
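To make this concrete, here is a tiny, purely illustrative sketch (a hypothetical toy lexicon of my own, not anything drawn from the textbook) of the idea that meaning attaches to combinations rather than to individual phonemes: neither [ʒ] nor [ə] has an entry on its own, but the combined string does.

```python
# A minimal, illustrative sketch: meaning lives at the level of combinations,
# not individual phonemes. The tiny "lexicon" below is a hypothetical toy,
# keyed by IPA strings.
lexicon = {
    "ʒə": "I (French 'je')",
    "ðə": "the",
}

def meaning_of(phonemes):
    """Look up the meaning of a sequence of phonemes, if any."""
    combined = "".join(phonemes)
    return lexicon.get(combined, "(no interpretable meaning)")

print(meaning_of(["ʒ"]))        # (no interpretable meaning)
print(meaning_of(["ə"]))        # (no interpretable meaning)
print(meaning_of(["ʒ", "ə"]))   # I (French 'je')
```

Of course, a dictionary lookup is a crude stand-in for whatever actually happens in a listener's head; the point is only that the "extra," less tangible element appears at the level of the combination.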
 
                The second idea that isn’t quite captured by a systematic, linear view of the process of constructing language relates to how we (humans) learn languages. As an infant, the first type of language you are exposed to is probably composed of entire words and sentences, spoken between the adults around you; in other words, before you are exposed to discrete individual phonemes, you are exposed to the larger units of language that those phonemes combine to create. Yet the first units of language that an infant actually produces are most likely individual phonemes and morphemes – an infant produces these accidentally, while babbling, and later learns to combine them in meaningful ways. And while these are the first units of language that an infant will actually produce, they are most likely not the first units that he/she will understand. Most children raised in an English-speaking environment learn to use the word “mama” (or some derivation thereof) before they learn that the morpheme “un-” usually implies negation; most children raised in a Spanish-speaking environment will understand [mi’xita] (or [mi’hita]) (“mi hijita” [sometimes written as “mijita”] or roughly “my little daughter”) before understanding that [ita] (or [ito]) is a diminutive suffix used to indicate smallness, cuteness, or affection. . . .
 
                It’s also interesting to examine this a-linear acquisition of language in the context of foreign language learning, and how this differs from the process by which an individual learns his/her native language. When you learn a second language as an adult (or as an older child), you’ve already learnt to speak – that is, to produce organized combinations of sound that, together, convey meaning. So you don’t usually begin by pronouncing the individual phonemes and morphemes that exist within a language. Instead, you often start with some common phrases. For example, if learning Spanish as a second language, you might learn to say “Tengo treinta años” ([teɪŋgo tʀeɪnta aɲos]) before you learn that “tengo” means “I have” and “años” means years – and before you learn to pronounce [ɲ] as an isolated phoneme. After learning some phrases you go back and learn individual words, and maybe as you learn grammar you then break those words into morphemes in order to better understand their meanings. And maybe at some point you work on pronouncing the individual phonemes that exist in your second language and not in your native one. (All of this is a very individualized process though; clearly this is just one of many possible orders in which one might go about learning a second (or third or fourth) language.)
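As a purely illustrative aside (the glosses and morpheme breaks below are my rough assumptions, not the textbook's analysis), the phrase-first order described above can be sketched as a series of successively finer decompositions:

```python
# A rough sketch of the phrase-first learning order described above:
# start from the whole phrase, then break it into words, then into morphemes.
# Glosses and morpheme boundaries are illustrative assumptions only.
phrase = "Tengo treinta años"            # "I am thirty years old"

word_glosses = {
    "Tengo": "I have",
    "treinta": "thirty",
    "años": "years",
}

morpheme_breaks = {
    "Tengo": ["teng-", "-o"],            # stem of 'tener' + 1st-person ending
    "años": ["año", "-s"],               # noun + plural suffix
}

for word in phrase.split():
    gloss = word_glosses[word]
    parts = morpheme_breaks.get(word, [word])
    print(f"{word!r} = {gloss!r} -> morphemes: {parts}")
```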
                So, acquisition of a foreign language seems to follow just as winding and a-linear a path as acquisition of one’s native language. Yet it isn’t the same winding path. It begins in a very different place than does the acquisition of one’s native language. Rather than first learning to produce phonemes and then learning to combine them in meaningful ways, when learning a foreign language one begins by learning to understand and pronounce (albeit badly) much larger units of meaning before then breaking them down into their component parts. And I’m not necessarily saying that this difference is a bad thing. Learning a language at birth and learning a language later in life are fundamentally different – you begin each with different knowledge and a different skill set, and so the process must necessarily be different in order to accommodate that. 
                But I think it’s interesting to look at the differences between the two acquisition processes (and to look at each process in isolation as well) in conjunction with the idea of language as comprised of a series of layers of ‘building blocks’ (phonemes, morphemes, words, sentences, etc.) which combine to create meaning.
. . .
It appears (at least to me) that learning a language is not a linear process wherein one begins with the smallest distinct units of sound (phonemes) and then combines them into bigger and bigger units until eventually arriving at flowing speech. Rather, it’s a process wherein all the different components of language are learned and relearned continually and concurrently, starting in infancy and continuing until death.

Saturday, September 27, 2014

Sophia Jung: Mutual Intelligibility - How Do People Understand Each Other?

To this day, linguists continue to develop and refine methods of accurately transcribing speech. The textbook’s discussion of articulatory phonetics, the production of speech sounds, states that a good phonetic transcription should be “consistent and unambiguous” (40). However, speech is neither standardized nor visual. Not everyone speaks exactly as words are written, no matter how consistent and unambiguous a phonetic transcription may be. In fact, when words are strung together, people often pronounce neighboring words differently than they would in isolation.

That got me thinking: how do people understand each other when they speak with different accents? For example, if I pronounce the word tomato as [təˈmeɪtoʊ], but my friend pronounces the word tomato as [təˈmaːtəʊ], how do I know my friend is saying tomato, when I don’t pronounce the word the same way my friend does?

The principle of mutual intelligibility holds that, within the same language, people can understand each other despite differences in pronunciation, vocabulary, and grammar (410). In fact, “every speaker speaks his own idiolect, because no two speakers of a language or a dialect speak in exactly the same way” (411). But how does mutual intelligibility come about? Sure, I may be able to intuitively deduce that [təˈmaːtəʊ] is tomato. However, when it comes to longer phrases, especially those of different dialects, comprehension becomes more difficult. Therefore, I believe that, even within the same language, a person has to be familiar with the different pronunciations of words before he or she is able to hear and properly analyze utterances in different accents.
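As a rough illustration (my own toy comparison, not anything from the textbook), the two transcriptions of tomato above overlap far more than they differ, which is part of why the guess is available at all. The snippet below uses Python's standard difflib module to compute a crude similarity score between the two strings; string overlap is only a stand-in for whatever real perceptual matching does.

```python
# A crude, illustrative comparison of two IPA transcriptions of "tomato".
# Character overlap is only a stand-in for real perceptual similarity.
from difflib import SequenceMatcher

pronunciation_a = "təˈmeɪtoʊ"   # how I say it
pronunciation_b = "təˈmaːtəʊ"   # how my friend says it

ratio = SequenceMatcher(None, pronunciation_a, pronunciation_b).ratio()
print(f"shared material: {ratio:.2f}")   # about 0.67 for these two strings
```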

I am a native Korean speaker, and I moved to the United States when I was nine years old. I grew up in the Bay Area, so although I learned English pretty quickly, I never forgot Korean because there were a lot of Korean-Americans in the community. I never had trouble understanding the Korean-American adults, whether they spoke perfect English or broken English, because I grew up hearing my parents speak English with a Korean accent. For example, my dad can’t pronounce [f], [z], and [l] sounds. In addition, some Korean vowels differ from English vowels, so for many years, my dad couldn’t pronounce (nor hear) the differences between ‘leave’ [liːv] and ‘live’ [lɪv].

If I heard ‘live’ pronounced as [liːv] for the first time, I would be confused. But when I heard it a second time, I would remember the last time I had heard ‘live’ pronounced that way, and I would be able to process the message faster. Soon, I would get used to it. Simply put, I would be able to understand someone speaking in a different accent or dialect because I had heard it before.

Nonetheless, after spending a year at Stanford, I realized that not only had I forgotten a lot of Korean, but I had also gotten rusty at understanding English spoken with a Korean accent. For example, I now have to focus more than before in order to correctly understand Korean adults speaking Korean-accented English. More often than not, when I am not paying enough attention, I completely miss what they say and have to apologetically ask them to repeat themselves.

The textbook does mention that “although the principle of mutual intelligibility is useful in theory, from a practical standpoint,” it all depends on how native speakers perceive the language and its different dialects (410). Has my perception of Korean-accented English changed? Can people forget how to understand different pronunciations, just as they can forget a language they no longer use?

[Word Count: 571]

Wednesday, September 24, 2014

Tom Cao: A little more on descriptive vs. prescriptive grammar

In the first lecture we had some very interesting discussion about descriptive vs. prescriptive grammar, which is wonderful, because sooner or later, when you tell a friend that you are taking (or have taken) a linguistics course, chances are you'll be bombarded with a bunch of issues concerning split infinitives or prepositions at the end of sentences. Those topics are positively prescriptive, yet unfortunately they remain what most people think of when they think about linguistics.

How do we convince our friends that prescriptive grammar is irrelevant to linguists? Well, usually you just tell them, “I don’t care, whatever.” And you’ll appear cool. But we ourselves should understand the rationale behind the issue. Linguistics is a science, and scientists strive to describe and explain phenomena in the world as accurately as possible. If an advocate of Aristotelian mechanics points his middle finger at two stones of different masses and shouts, “how dare you guys not obey the physical law!” everyone else will regard him as just insane.

But linguistics is never as simple as physics. Unlike stones, human beings are notoriously susceptible to outside influence. Two quick examples: 1) The Economist’s Style Guide offers the following comment on split infinitives, “the ban is pointless. Unfortunately, to see it broken is so annoying to so many people that you should observe it,” thus rendering itself pointless as well; 2) even those who stubbornly insist that “data” should always be plural would find the collocation “few data” somewhat unidiomatic (or, to put it better, doge-y), which is probably why even the New York Times came up with this sentence with self-contradictory grammar, “very little data have been gathered about the behavior of scientists themselves.”

There’s also the issue of hypercorrection. When children are incessantly told to say “it is I” rather than “it’s me,” or “he’s taller than I” instead of “he’s taller than me,” they tend to regard “I” as a more “correct” form than “me,” hence the widespread usage of “between you and I.” The same holds true for “whom,” which is eerily perceived as a marker of higher education regardless of whether it is used “correctly.” A survey even indicates that, for males, randomly sprinkling “whom” into your speech can be a great way to attract ladies. Wow! No, I meant, whoooom!

Another thought-provoking perspective comes from academia. People in a common discipline may communicate using technical terms, and these terms need generally agreed-upon definitions. So a generative linguist’s sense of “grammar” may be equivalent to our instinctive language faculty rather than some thick book full of prescriptive rules that lies quietly, gathering dust. But these terms can also be part of the common lexical repertoire of English speakers, and this double identity can sometimes cause problems. A few months ago, several scientists published an article in Nature titled A comprehensive overview of chemical-free consumer products, and there wasn’t a single word in the main text. For chemists, “chemical” is almost synonymous with “substance” (with the exception of some sub-atomic particles), but for the rest of us, the term most certainly connotes, if not denotes, something artificial or even poisonous. Is this also prescriptivism? Apparently it’s a line too fine to draw.

Finally, several years ago, I used the phrase “encouraging news” in an essay for my English class (taught in China). My teacher said the usage was “wrong” because the Oxford Advanced Learner's Dictionary says “encouraging” is “not usually [used] before noun,” and teachers of English in China usually interpret “usually” as “always”. Why is the dictionary saying that? (For non-native speakers, go check Merriam-Webster.) During a lecture in Manchester, UK, I asked an editor of Oxford Dictionaries. It turned out that they had myriad illustrative examples in their corpus (i.e. database), so “even if it’s only 1%, it’s still a lot.” The moral of the story? Even the most descriptive approach can occasionally lead to prescriptive results!

Griffin Dietz: Gestures in Vocabulary

When a young child steps back, throws up her hands and—perhaps unconsciously—lets out a declaration of disassociation in response to dropping a vase, we question where she learned this method of using words to separate herself from blame. Languages, according to the Language Files textbook, fall into one of two modalities: auditory-vocal or visual-gestural (24). This description of language types leaves little room for crossover, meaning a language can be classified into only one of the two categories. Most people, when they think of language, immediately associate it with speech, which explains why our discussion focused so heavily on the implicitly learned nature of the young girl’s verbal response. Her word choice created a disconnect between her and the broken vase, a verbal device she picked up through observation as opposed to explicit teaching.
Now reimagine the situation: without uttering a word a young girl drops a vase and jumps back, throwing her hands up and away from the mess. Does she not essentially convey the same message? Another person would look down and see a broken vase, and look up to see the girl’s body and hands far removed from the shards on the floor. Now that’s not to say the observer couldn’t put together cause and effect, but the added spoken disassociation would not prevent that association either. The girl’s reaction in this scenario, too, was never explicitly taught. We learn our gestures through observation in much the same way as spoken language. People often adopt movements or facial expressions of family and friends in much the same way one would take on their manner of speech.
Returning to the second scenario, I begin to question how separate the two established language modalities truly are. While I grant that most gestures augment or enhance conversation, as in the original situation with the young girl and the vase, there are certainly cases when movement may replace spoken words. There is by no means a dictionary or codex detailing the meanings behind gestures in our society, and yet we often understand certain motions to mean certain things as well as if words were spoken to us. A wave of the hand can acknowledge someone else’s presence, a shrug indicates a lack of knowledge or unwillingness to help, and one may hold up fingers instead of saying numbers aloud. And unlike learning a second language, in which the learner at first likely mentally translates everything he or she hears into his or her native tongue, an observer of gestures can understand their meanings without any mental translation.
Keeping this in mind, I have come to believe the language with which we interact with one another is a mix of both auditory-vocal and visual-gestural, meaning that while most communication is made with speech, there are certain expressive gestures that are also elements of our spoken language’s vocabulary.


To inspire discussion: Can you think of expressions or gestures you may have picked up through observation from the people around you? What about an occasion when you’ve communicated without using spoken words?

Linguistics in Everyday Life


I don't think we realize the extent of linguistics' application in everyday life. For example:


1. In the card we received, nasal sounds were associated with m and n noises. Assuming that breathing relieves pain, is it possible that we make m and n noises when we are in pain in order to bring more oxygen to our brains and relieve the pain, or is this too far-fetched a theory?

2. Also, as I brought up in class, doctors ask us to say "ahh" because the sound causes the tongue to stay at the bottom of the mouth, allowing doctors to see into the back of our mouths and to obtain samples from there.

3. Doctors could also use linguistics in conjunction with speech therapists to look at how patients with brain injuries, such as post-stroke patients, produce a different type of speech. For example, a person who has had a right-brain stroke may have a hard time using the muscles on the left side of the tongue. Knowledge of linguistics can help someone like this learn how to speak again.

Are there any other examples that anyone can think of?

Tuesday, September 23, 2014

Virgil Zanders: The Broken Vase and the Seventh Element of Language

The six elements that comprise our linguistic competence help to explain how children learn and communicate language. Equipped only with their instincts and the desire to understand and be understood, infants endeavor daily to comprehend phonetics, phonology, morphology, syntax, semantics, and pragmatics. To my mind these elements neatly explained linguistic competence until Professor Sumner shared with us the cute little story of her daughter and the broken vase. The story was simple enough. One day her daughter accidentally dropped a vase, which shattered into myriad pieces with a thunderous crash! When Professor Sumner looked her daughter’s way to observe the wreckage, her daughter, who is preschool-aged, threw up her hands in disbelief and exclaimed, “the vase broke!” This was a clear example of a young child’s mastery of the six elements. With no formal prescriptive grammar training, she used language in a fairly sophisticated way to deflect blame. Prescriptive grammar, however, derives from deliberate study of the ways one can use language to achieve an outcome or make a specific impression. Before one can know what kind of impression a sentence will have on the listener, the communicator must strongly suspect that the listener will share her belief in the likelihood of that impression. Put another way, the young Miss Sumner had to believe beforehand that her mom would find it plausible that the vase broke on its own, without any additional assistance. All that would be needed was a compelling follow-up sentence to cement that belief: “the vase broke!”


Since children learn the elements of linguistic competence through instinct and observation, I can’t help but wonder: how? How did the young Miss Sumner come to believe that it was plausible that a vase breaking by her hand could actually be viewed as, well, not breaking by her hand? Where did she learn that her mom might actually believe “the vase broke!” and that it wasn’t caused by her? To answer this question of how, I’d like to set forth the possibility of a seventh element of language, one that children learn by instinct and later manipulate through prescriptive learning. This seventh element I will call the “normative case.”


Children learn language through observation and mimicry. Day after day, for hours, their adorable little minds are fixated on learning the cause and effect that utterances, when used properly, can produce. Crying might get you fed or it might get your diaper changed. Saying “eat,” however, should definitely get you fed. Pointing to the cookie jar atop the refrigerator might get you a cookie, or it might get you a closer look at the refrigerator magnet holding up a picture of Uncle Ben at the family picnic. Pointing at the cookie jar and saying “coo-key” should definitely increase the odds of pulling down a cookie. Children also observe normative phenomena, i.e. things that ought to happen, that initially have nothing to do with language but everything to do with cognition and the way they perceive the world. These perceptions ultimately find their way into language. A child seated in a high chair who throws her sippy cup to the floor and never sees it break will one day come to believe that sippy cups do not break. Plastic bowls filled with food that hit the floor also do not break. Neither do pacifiers, stuffed animals, and countless other childproof objects that parents equip them with. Eventually, most children will instinctively come to believe a normative case: if I drop something, it ought not break. Or, to put it in child speak, “the sippy cup and anything else I let fall to the ground won’t break!” I believe we all experienced normative cases such as these as children. Ultimately, however, we must unlearn them as we grow older and recognize the truth. But the reactionary instincts created by the normative case never fully leave us. And in unexpected circumstances, these reactionary instincts can cause children and even adults to say things that seem silly. My best friend’s dad, for example, while in a hurry to get to work, once backed his car into the garage door while it sat in the driveway pointed toward the street. His excuse: “the car just went into reverse!”

I believe the normative case should be given at least a passing thought as a possible seventh element of language. Certain things we learned as children that shaped our instincts about what ought to happen, and that subsequently shape our words, are more pervasive in our language than we care to realize. The normative case can be ascribed to deflecting blame, fibbing, and a host of other utterances that even as adults we find ourselves saying out of instinct.