
Speaking Silently: the Neuroscience of Sign Language

Languages embody the knowledge and history of a culture, while cognition translates and represents this wisdom (the "internal world") to the outside (the "external world"), allowing us to interact with the environment and life events.

"Change your language and you change your thoughts." (Karl Albrecht)

Unlike animals, humans come into life endowed with a remarkable ability: speech. However, contrary to the common misunderstanding that equates speech with language, studies in cognitive neuroscience since the 1960s have demonstrated that sign language is an independent language, complexly organized around the linguistic levels of meaning and grammar. How does sign language differ from spoken language? Do these languages share common features in the brain? And how does the brain process sign or spoken language?

A shared brain region to process sign and spoken language

In a study recently published in the journal Human Brain Mapping [1], researchers at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig, Germany, investigated which areas of the brain are responsible for processing sign language and how large the overlap is with the brain regions hearing individuals use to process spoken language. To this end, they conducted a meta-study, pooling data from experiments on sign language processing carried out around the world. "A meta-study allows us to get an overall picture of the neural basis of sign language. So, for the first time, we were able to statistically and robustly identify the brain regions that were involved in sign language processing across all studies," says Emiliano Zaccarella, group leader in the Department of Neuropsychology.

Across studies, Broca's area, located in the frontal lobe of the left hemisphere, was found to be highly involved in processing sign language. Not surprisingly, this area is already well known to be active during spoken language processing, where it is chiefly involved in language production, meaning, and grammar. When compared with previous studies, the Leipzig-based results confirm an overlap in activation between spoken and sign language, precisely in Broca's area. In addition, activation was observed over the right frontal region, the counterpart of Broca's area on the opposite side of the brain. A non-spoken language such as sign language also carries non-linguistic aspects, mostly spatial or social information. What does this mean? Movements of the hands, body, or face, which give signs their form, can be perceived similarly by deaf and hearing individuals. However, only deaf individuals also activate the language network in the left hemisphere, including Broca's area. In their brains, gestures are processed and perceived as linguistic units, not as simple movements, which is how they register in the brain of a hearing individual.

This research therefore shows that Broca's area, in the left hemisphere of the human brain, is a central station in the language network. Working together with other networks, this region processes spoken and written language, and even more abstract forms of language. "The brain is therefore specialized in language per se, not in speaking," explains Patrick C. Trettenbrein, first author of the published research [1] and doctoral student at the MPI CBS. Nevertheless, the need to discover more is at the core of science: the research team now aims to explore the different parts of Broca's area and whether, in deaf individuals, these subregions specialize in either the meaning or the grammar of sign language, as they do in hearing individuals.

What is sign language?

Much of the research on this topic traces back to Stokoe (1960), who first argued that a signed language is a language like any other. With his colleagues, he compiled the dictionary of American Sign Language (ASL) (Stokoe, Casterline, & Croneberg, 1965), in which arbitrary handshapes, palm orientations, locations, movements, spatial relations, and the order between signs define a signed language: a language that satisfies precise linguistic criteria (Lillo-Martin, 1997; Siple, 1997) at every level, from phonology, morphology, syntax, and semantics to pragmatics. Although ASL is the most studied signed language, others have also been investigated in cognitive neuroscience. Learning the handshape structure of ASL poses the same difficulty that hearing individuals encounter when acquiring phonemes, and ASL syntax follows principles similar to those of spoken grammar, alongside a structured spatial grammar (Lillo-Martin, 1997). The acquisition of ASL therefore follows the same sequence, and with a similar progression, as that of any spoken language (Studdert-Kennedy, 1983) [2].


The neural basis of sign language

The majority of our knowledge about the neural basis of linguistic communication comes from investigating the mechanisms underlying spoken language. However, human language is not restricted to the oral-aural modality. Exploring the neural processes of sign language thus refines what we know about language today: which characteristics belong to this cognitive domain in general, and which are exclusive to a language that is spoken and heard. Neuroimaging and brain-lesion studies show that sign and spoken language share similar neural pathways, both involving activation of the left-lateralized perisylvian network. At the same time, studies demonstrate that these languages process information differently, revealing how communication and language mechanisms function in hearing and deaf individuals through modality-specific patterns, as well as modality-specific language impairments, and fostering our understanding of the neural basis of cross-linguistic differences [3,4].


See the full article by @OpenEXO

