NERIE-NCERT | ISL

Brief Description of Sign Language Grammar

Assumptions about Sign Language

When comparing sign language and spoken language, a common assumption is that sign language is merely a communication system of hand movements and facial expressions, and therefore cannot have a grammar or sustain meaningful conversation; it is taken to be an alternative system invented by hearing people to help d/Deaf people learn and speak a spoken language. The term ‘language’ is usually understood to mean a communication system that uses sound, has a grammar, and can be represented in written form; anything that deviates from this is dismissed. Sign language is therefore often regarded as an invented set of gestures. As discussed earlier, several research studies (Stokoe and Marschark 1999; Petitto 1994; among others) have shown that gestures do not constitute sign language. Gestures are actions and demonstrations in which a person literally acts out an entire event. When signing, by contrast, a signer can produce an entire sentence within seconds, and although some gestures may accompany signing (Emmorey, 1999), they are generally not considered part of the sign language itself.


People often look at sign language as a representation of a spoken language. Sign language is not a simple word-to-sign translation, and signs are not produced by following each and every word of a spoken sentence. ISL does not depend on spoken languages (Hindi or even English), so the way sign sentences are formed is independent of them. For example, the word order of English is SVO, whereas the word order of ISL is SOV: in English the sentence is “Man kicks dog”, while in ISL the same sentence is structured as [MAN DOG KICK]. Hence, sign language does not encode a spoken language; it is structurally independent of spoken languages.
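
Purely as an illustration of the word-order difference (and not as a description of any actual ISL translation tool), the short sketch below reorders the glosses of the English SVO example above into the SOV order used in ISL; the function name and gloss strings are illustrative only.

```python
# Illustrative sketch only: reordering an SVO gloss triple into SOV order,
# using the "Man kicks dog" example from the text. Real ISL grammar involves
# much more than linear order (spatial modification, non-manual features, etc.).

def svo_to_sov(subject: str, verb: str, obj: str) -> list[str]:
    """Return the glosses in Subject-Object-Verb order."""
    return [subject, obj, verb]

if __name__ == "__main__":
    english_order = ["MAN", "KICK", "DOG"]        # S V O, as in English
    isl_order = svo_to_sov("MAN", "KICK", "DOG")  # S O V, as in ISL
    print(" ".join(english_order))   # MAN KICK DOG
    print(" ".join(isl_order))       # MAN DOG KICK
```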

Hearing individuals tend to look for equivalents of signs in their own language, and learners of sign language often claim that it lacks a comprehensive vocabulary. For example, English and Khasi have single words for brother (u Hynmen), wife (ka Lok), son (u Khun), etc. In Shillong Sign Language (ShSL), however, kinship terms are signed using a combination of two signs: BROTHER is MAN + SIBLING; WIFE is WOMAN + MARRY; SON is MAN + BABY. Sign language therefore does not represent the spoken language. Where a spoken language may use a single word for an entity, a sign language may use two or three signs to describe it. Although people may see this as compensation for a limited vocabulary, it must be noted that sign language here employs the same morphological process of compounding that spoken languages use to convey meaning. Morphological processes such as inflection and derivation are equally common in sign language.

People often think that sign language is articulated by the hands only. This is only partially true, since sign language combines the hands, the face and the body. Question words in ISL, for instance, are marked by the manual signs together with raised eyebrows, as in the glossed examples below:

  1. NMF: rb

    ShSL: YOUR NAME WHAT?

    English: What is your name?

  Negative sentences are likewise made with a combination of the hands and a headshake:

  2. NMF: hs

    ShSL: I GO NO

    English: I am not going.
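
As a purely notational sketch (the field names and the rb/hs labels follow the glossing used in the two examples above; this is not an established transcription standard), the two utterances could be recorded as follows:

```python
# Sketch of a gloss record pairing manual signs with a non-manual feature (NMF):
# "rb" = raised brows (marks questions), "hs" = headshake (marks negation),
# as in the ShSL examples above. The record layout is illustrative only.

from dataclasses import dataclass

@dataclass
class GlossedUtterance:
    signs: tuple[str, ...]   # manual signs in signing order
    nmf: str                 # non-manual feature spread over the utterance
    translation: str         # English rendering

question = GlossedUtterance(("YOUR", "NAME", "WHAT"), nmf="rb",
                            translation="What is your name?")
negation = GlossedUtterance(("I", "GO", "NO"), nmf="hs",
                            translation="I am not going")

for utt in (question, negation):
    print(" ".join(utt.signs), f"[NMF: {utt.nmf}]", "->", utt.translation)
```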

Finally, a common misconception is that sign language is one universal language used by all deaf people in the world. Every country has its own sign language with its own vocabulary, and these languages are not mutually intelligible. Dialectal variation within a sign language also exists at many levels from one region to another. Just as Hindi is different from English, ISL is different from BSL; and just as Hindi has various regional dialects, the sign language of Mumbai differs from that of southern India or of Delhi. In this context, the common or standard variety across India is ISL.


Linguistics and Sign language

Comprehensive linguistic research on sign language began with William Stokoe in the 1960s. Stokoe was an English professor appointed to teach English to deaf students at Gallaudet University. Having been formally taught sign language, he realised that the language his students used amongst themselves followed a different linguistic pattern from what he had learnt. He was intrigued by Chomsky’s Syntactic Structures (1957), which proposed that the fundamental principles of language structure are biologically determined in the human mind. Influenced by Chomsky, Stokoe explored the possibility that sign language, too, may emerge naturally from the brain (see Maher, 1996). According to Maher (1996:63):

Stokoe became increasingly adept at recognising aspects of ASL that no one had paid much attention to before, and he began to realise just how detrimental that lack of attention had been for deaf students. He was losing patience with teachers who spent their time complaining that the college shouldn’t have let students in because they didn’t know how to write an English sentence and because their vocabulary was lacking and because their grammar was nowhere.

His pioneering work, Sign Language Structure: An Outline of the Visual Communication System of the American Deaf, reveals that sign language operates in ways similar to spoken language. Like spoken language, sign language has linguistic (phonological) rules governing how each location, each handshape and each movement of the hand(s) can combine with the others to convey meaningful utterances visually. His later work on ASL, especially the Dictionary of American Sign Language (1965), provided further evidence that sign language is a genuine language which fulfils the same functions as spoken languages, displays similar characteristic features, and is subject to the same principles and constraints of Universal Grammar. Sign languages have since been recognised as natural human languages.

Subsequently, the field of sign linguistics emerged, in which much research focused on the influence of modality on language structure (Meier et al. 2002). The area of phonology saw its most important development with Stokoe’s first linguistic description of sign language (Stokoe 1960; Battison 1974; Padden & Perlmutter 1978; Brentari 1998; Liddell 1980, 2003), which sparked a whole new perspective on how one sees and thinks of human languages.

The emergence of the concept of language in the visual modality, as propounded by Stokoe, triggered further linguistic research on the differences between spoken and sign language and on establishing the linguistic status of sign language. These studies centre on the question of whether the gestures of deaf people constitute a language, and if so, what the formational properties of this language are and how it differs from spoken language. They therefore try to determine whether sign language is only a pantomime based mostly on iconicity or a natural language equivalent to spoken language (Kendon, 1985, 1988; Petitto 1994; Stokoe & Marschark 1999; McNeill 1999). Neurologists also began to study several aspects of sign language relating to the cognitive areas of language function and use (Schlesinger 1978; Klima & Bellugi 1979; Hanke 1995; Emmorey 1999; Ekman 1999; McNeill 1999, and several others). Neurological studies have shown that, regardless of the difference in modality between spoken and sign language, language processing in the brain takes place in the same manner whether a person is deaf or hearing.

Role of gestures and iconicity

Research studies claim that sign language has both linguistic and gestural features (Liddell 2003, among others). In studies of spoken languages, gestures have always been regarded as an important feature in the production and reception of language, and they are generally seen as a common non-verbal form of communication among hearing people. According to Johnston (1989:26), “Gestures are movements of the body and especially of the hands and arms which express an idea, emotion or attitude. Mime is a way of using gestures and bodily movement, without speech, sign or sound, to act out something. One literally goes through the motions of doing something without actually doing it, and without the objects or tools necessary to perform the action actually being present”. It is against this background that sign language has often been misperceived as a collection of gestures rather than a full-fledged language.

Gestures are also body movements closely tied to a specific culture and community. For example, greeting gestures differ from one language community to another around the world. Gestures are informative acts of human behaviour, inseparable from people’s everyday interactions; they are culturally intertwined and interwoven into every mode of language use. Of course, some gestures are used around the world: one universal gesture, according to Axtell (qtd. in Kendon, 2002), is the ‘smile’, the ‘ultimate gesture’. Some researchers regard gestures as the first manifestation of language and hold that they can be realised through different media, which may be iconic, symbolic or indexical.

Deaf children of hearing parents with no exposure to sign language spontaneously develop a visual-gestural system of communication. Studies of spoken languages have also confirmed that, even in the pre-linguistic stage, children begin with gestural communication, and deaf children develop a gestural system similar to that of hearing children before they learn to speak or sign. Family members of deaf children also use gestures to communicate; these are mostly directional and demonstrative gestures commonly known as ‘homesigns’. Homesigns are the initial system of communication used by deaf children before they join school. When we observe deaf people signing, we usually see hand movements of different shapes and sizes together with facial expressions, and to an outsider these can appear to be the ordinary gestural expressions of people who cannot speak. It is this appearance that leads to the mistaken assumption that sign language is of the same nature.

Gestures are therefore human actions that usually accompany both speech and sign to express an idea or meaning that is culturally specific to a particular language group. In spoken languages, gestures are clearly visible and distinct from words; hearing people saying words like ‘silence’, ‘quiet’, ‘yes’ or ‘no’ may be relatively unaware that they are also making faces or moving their heads or hands along with what they are saying. Although sign languages use gestures as well, many signs are iconic, i.e. their physical appearance resembles the objects or actions they denote. Icons are symbols which share a physical similarity or an exact resemblance to the objects they represent. Many new signs are created through an iconic association of meanings, represented through mime and gesture, and these signs eventually develop into highly complex signs.

Cohen et al. point out that “to indicate an action the signer generally performs an abbreviated imitation of a characteristic part of it. For example the word WRITE is signed by making such an imitative movement. Another example EAT, is signed by a movement as if bringing food to the mouth”. Similarly, objects such as TABLE and CUP are portrayed by the hands, and action verbs such as SLEEP and RUN are signed with imitative movements. According to Emmorey (1999:155), “Signers do not produce spontaneous idiosyncratic hand gestures that suddenly become part of sign language. Rather, the facial and body gestures are articulated only to emphasise the meaning which is communicated through sign language (particularly during narratives). These gestures are often iconic but may also be metaphoric”.

Spoken languages undergo a process of grammaticalization, in which lexical forms gradually develop into basic functional elements and, later, complex grammatical forms, making communication more effective. Sign languages employ the same means (Wilcox, 2004; Pfau and Steinbach, 2006): homesigns and gestures become conventionalized into signs, which gradually emerge as independent lexical items. For example, the pronominal system of ShSL is derived from pointing gestures, which are used systematically within the signing space to indicate referents and as time indicators.

The iconicity of signs is generally expressed through gesture, the signer enacting a particular sign according to its referent or concept. This is not to say that all signs are iconic: if they were, sign language would be easily understood and acquired by a person from any language group, and there would be only one universal sign language. As discussed above, even gestures are not universal; their use depends on the conventions of a particular culture and language group. Nevertheless, some gestures and iconic signs do merge into the sign language lexicon.

As noted above, one important feature of sign language, owing to its visual-gestural modality, is that many signs are iconic: their physical appearance resembles the objects or actions they denote.

Because of this frequent iconicity, the meaning of signs is by comparison more transparent than that of spoken words, which is one of the reasons sign language can be faster to learn than a spoken language. These iconic signs occur in a stylised system and function in a highly conventionalised order, undergoing a process of change until they develop into well-formed signs in actual sign language interaction. The grammatical patterns of sign language are therefore quite different from the linguistic structure of spoken languages. Meier (2002:15) points out that “the frequency of iconic signs in signed languages leads to the conclusion that there are in fact two pertinent design requirements on linguistic vocabularies:

I. Languages have vocabularies in which form and meaning are linked by convention.

II. Languages must allow arbitrary symbols; if they do not, they would not readily encode abstract concepts, or indeed any concept that is not imageable. We know, of course, that ASL has many arbitrary signs, including signs such as ‘MOTHER’ or ‘CURIOUS’ or ‘FALSE’”.

Similarly, the relationship between signs in ShSL and the referents they represent is generally arbitrary. Young deaf children, however, appear to use signs that are mostly iconic, owing to the lack of exposure to any linguistic model of sign language in their homes. Teachers of deaf children use visuals to relate meanings and concepts to the English language, and when children are given many visuals it has been observed that they try to gesture the concepts in their own system of communication. It is interesting that when the children saw a picture of ‘SUPERMAN’, they immediately enacted it by extending the right fist upwards and placing the left hand on the waist, with eyes gazing towards the sky. When one of the students saw a picture of an ‘apple tree’, she immediately gestured shaking the tree so that the apples would fall to the ground and she could eat them.

Deaf children are capable of creating a language of their own which is much more systematic than the gestures of their hearing parents: the Deaf use a system of communication that fulfils the same intellectual, expressive and social functions as spoken languages do; but instead of being based on the voice and perceived by the ear, their system is based on signals produced by the hands and perceived by the eyes (Klima and Bellugi, 1979).

Phonology

The design features of sign language offer an interesting insight into how every language consists of a finite set of discrete elements which can be combined to form words; even though the modality is different, sign language is built in the same way. In his linguistic description of ASL, Stokoe (1960) used the term ‘cherology’ for what corresponds to phonology in spoken language, and ‘cheremes’ (Stokoe, 1965) for the basic linguistic units of sign language. He found signs in ASL to comprise three formational features: location (the tabula or tab), handshape (designator or dez) and movement (signation or sig). Within each feature there is a subset of values, each established on the basis of the occurrence of minimal pairs in the language. An example of minimal pairs in ShSL is presented in Fig 2.3.a.

Further research in sign language phonology (Battison, 1974 & 1978; Kyle and Woll, 1980 & 1985, and several others) led to the recognition of two more features: orientation and the non-manual features. Each sign is therefore a combination of five phonological parameters in its sub-lexical structure: handshape (comparable to the phonemes of spoken language); location (comparable to the place of articulation of phonemes); movement (how the handshape moves); orientation (the direction the handshape faces); and the non-manual features (eye gaze, raised brows, etc.). These five phonological features, together with syntax, morphology and meaning, combine to form sentences when communicating in sign.

Handshape (HS):

Handshape (HS) is the basic formational unit of a sign. Stokoe (1960) used the term chereme to describe handshapes, analogous to the phoneme in spoken language. Handshapes may differ from one sign language to another; handshapes ranging from the simplest to the most complex found in the world’s sign languages are also present in ISL and in ShSL. The core or major handshapes in ShSL are /A/, /G/, /B/, /5/, /O/, /C/, /L/ and /V/. These handshapes function like phonemes: for example, the difference between COFFEE and TEA lies only in the handshape. Both signs have the same location, movement and orientation, but the handshape for COFFEE is /C/ and the handshape for TEA is /fO/. This single contrastive unit, /C/ versus /fO/, signals the difference in meaning between the two signs, as shown in Fig 2.3.1a.
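
To make the notions of parameters and minimal pairs concrete, here is a small illustrative sketch. Only the /C/ versus /fO/ handshape contrast is taken from the text; the location, movement, orientation and NMF values below are simplified placeholders, not a transcription of the actual ShSL signs. Two signs that differ in exactly one parameter form a minimal pair, as COFFEE and TEA do for handshape.

```python
# Illustrative sketch: a sign as a bundle of five phonological parameters.
# Only the /C/ vs /fO/ handshape contrast comes from the text; the other
# values are simplified placeholders, not a transcription of the ShSL signs.

from dataclasses import dataclass, fields

@dataclass
class Sign:
    handshape: str     # HS
    location: str      # LOC
    movement: str      # MOV
    orientation: str   # ORI
    nmf: str           # non-manual features

def contrasting_parameters(a: Sign, b: Sign) -> list[str]:
    """Return the names of the parameters on which two signs differ."""
    return [f.name for f in fields(Sign) if getattr(a, f.name) != getattr(b, f.name)]

coffee = Sign(handshape="C",  location="neutral space", movement="small repeated",
              orientation="palm towards signer", nmf="neutral")
tea    = Sign(handshape="fO", location="neutral space", movement="small repeated",
              orientation="palm towards signer", nmf="neutral")

diff = contrasting_parameters(coffee, tea)
print(diff)                # ['handshape']
print(len(diff) == 1)      # True -> COFFEE and TEA form a minimal pair
```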

Location (LOC):

The second parameter in the phonological structure is location. Typically there are two types of location: body location and neutral space (NS). Location refers to the place at which a sign occurs, either on the body (as shown in the figure) or in the neutral space, the area just in front of the signer. Signs are articulated either in contact with parts of the body from the head down to the waist (body location) or in the neutral space; a sign made in the neutral space has no contact with, or significant proximity to, the body. Locations are phonemic: when two signs share all other features, a difference in body location signals a difference in meaning. Signers also use body locations and the neutral space as points of reference for particular meanings (see the section on signing space).

Movement (MOV)

Movement (MOV) is the third parameter. The movement of a sign may be on a large scale (the direction of the movement) or a small scale (the manner of movement), and several other factors such as shape, dynamics and size may accompany a single sign (see Sinha, 2012). For example, the signs TABLE and BENCH (Fig 2.3.3a) are distinguished by the size of the path movement. There are also interaction movements, in which the two handshapes touch each other in various ways: in signs such as MUMBAI and MONTH, the two hands approach each other, make contact and then separate.

Signs are organised into syllables, and movement corresponds to the nucleus of the syllable, like the vowel of a spoken syllable. Stokoe’s (1960) research on ASL treated the parameters of a sign as occurring simultaneously; later research, however, has focused on the sequential structure of signs. Movement is the most important phonological parameter in sign language. According to Liddell and Johnson (1989), just as spoken syllables consist of various combinations of consonant and vowel segments, signs consist of sequences of movement and hold segments. Their basic claim is that the Movement-Hold structure of signs is equivalent to the phoneme segments of spoken language. PLEASE in AUSLAN begins with a hold in the neutral space, moves slightly down and ends with a second hold in the neutral space (Johnston and Schembri, 2007); similarly, in ISL and ShSL the sign PLEASE begins with a hold below the chin (LOC), moves down, and ends with a second hold in the neutral space.
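
The Movement-Hold idea can be sketched roughly as follows (segment labels only, with simplified location descriptions based on the PLEASE example above; this is not Liddell and Johnson’s full notation):

```python
# Rough sketch of the Movement-Hold idea: a sign is a sequence of Hold (H) and
# Movement (M) segments, much as a spoken syllable is a sequence of consonants
# and vowels. The PLEASE example uses simplified labels based on the text.

from dataclasses import dataclass

@dataclass
class Segment:
    kind: str      # "H" (hold) or "M" (movement)
    detail: str    # where the hold occurs, or how the movement proceeds

PLEASE_ISL = [
    Segment("H", "below the chin"),
    Segment("M", "downward"),
    Segment("H", "neutral space"),
]

print("-".join(seg.kind for seg in PLEASE_ISL))   # H-M-H
```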

According to Brentari (1998), however, movement is the central formational category. In spoken languages, consonant segments can be specified as coronal or labial; in sign, location segments can be specified as being, for example, at the forehead, chin, shoulders or chest. How a sign language organises these phonological segments into a hierarchical feature geometry is captured in one theory of ASL phonology, the “prosodic model”.

Orientation

Orientation is the fourth parameter of a sign; it refers to the direction in which the inner palm surface of the hand(s) faces or points, towards the signer or towards the addressee. The section on the phonological inventory provides the specification of this feature. In ShSL, signs that contact the body need not be specified for hand orientation. For example, the sign I in Fig 2.3.4a is signed with the G handshape in contact with the body; here the inner palm surface of the G handshape faces the signer once it comes into contact with the body location.

Non-Manual Features/Activity (NMA)

Non-Manual Features/Activity (NMA) constitute the fifth phonological parameter. They carry several obligatory functions, for example marking question words, negation and the expression of emotions. They are also essential to making a sign grammatical; without them a sign may be ungrammatical or meaningless, and the presence of a particular NMA can distinguish one manual sign from another. Negation in ShSL is often marked by a manual sign accompanied by a headshake, or by a headshake alone, while YES is signed with a simple nod of the head. Facial expression in ShSL occurs most frequently with signs for emotions, for instance HAPPY, SAD, EXCITED and WORRY.

Eye-gaze fixation in the neutral space is used for pronominal reference in ShSL, as it is in ISL. For example, the pronoun SHE (third person singular) is signed by pointing to a particular point in space, to the right or left of the signer, with the eye gaze focused on that point. The sign YOU (second person singular) is located in front of the signer and is likewise accompanied by eye gaze. Eye gaze shifts along with the referents established in the signing space, and it also serves as a marker of time, direction and so on.

Modality and Signing Space

Languages in general are much easier to learn and use than to describe or explain. Sign languages around the world share a common grammatical structure. A review of research on sign language shows that the basic difference between sign language and spoken language lies in the ‘modality’ through which the language is conveyed: the hands, face and body transmit meaning in sign language, whereas the vocal apparatus does so in spoken language. Modality refers to the articulatory apparatus used to transmit and receive language. Meier and Quinto-Pozos (2002) provide a detailed account of the effects of modality on the structure of signed and spoken languages: in spite of modality differences, sign and spoken languages share common properties, with one exception, the articulators. Meier (2002:7) summarises these differences in Table 3.1a.

Table 3.1a

Sign | Speech
Light source external to signer | Sound source internal to speaker
Sign articulation not coupled (or loosely coupled) to respiration | Oral articulation tightly coupled to respiration
Sign articulators move in a transparent space | Oral articulators largely hidden
Sign articulators relatively massive | Oral articulators relatively small
Sign articulators paired | Oral articulators not paired
No predominant oscillator | Mandible is the predominant oscillator

This major difference between sign and speech naturally has major linguistic consequences for the grammar and structure of the language. The obvious question is how a visual-gestural language, one that has not developed a codified writing system or script, can have a grammar at all. Yet sign language research over the last 44 years has shown sign languages to consist of systematic structural properties that fulfil the same functions as those of spoken languages, while also revealing structural variation among sign languages; there is rich linguistic diversity in the sign languages operating within India alone.

To understand the linguistic structure of sign language, we begin by finding parallels between it and the units of spoken language. Since sign language is visual and produced by the human body in space, we have to look for grammar within this ‘signing space’. Spatial modulations are typical building blocks in the grammars of all sign languages studied to date (Supalla, 1995). The signing space extends from just below the waist to just above the signer’s head, forming a roughly box-shaped, three-dimensional area (Fig 3.1a).

There are different types of signs. Some signs contact places on the body, for example the ShSL signs for kinship terms. Other signs move from one location to another, such as signs that move from a contact place on the body to the neutral space, for example the sign LIKE. There are also signs that move to the non-dominant hand (for a right-handed signer, the right hand moves to the left hand), for example the sign HELP (Fig 3.1b); here the non-dominant hand is in the neutral space. Most double-handed signs take place in the neutral space. Alternating signs obligatorily involve the use of both hands, for example the sign MILK (Fig 3.1c). Spatial distinctions are crucial to understanding the grammar of sign languages: the sign TABLE (Fig 3.1d) occurs in the horizontal plane and the sign MORNING (Fig 3.1e) in the vertical plane.

The neutral space is smaller than the signing space; Johnston (1989:104) defines NS as that used by “signs (where) there is no contact with or significant proximity to a body part”. Petitto (1986) points out that “sign language acquisition of certain key aspects of signed languages depends crucially on intact spatial abilities. Different cognitive abilities are implicated in the processing of spatial descriptions in signed languages. For example, there are stages through which a child acquiring ASL moves before the deictic and pronominal systems are worked out and stabilized” (quoted in Johnston, 1989:137).

Sign language uses the signing space to talk about the location of objects and their movements: a classifier handshape, for example, indicates whether a referent is a small animal, a tree, a human, an airplane, a vehicle, a flat object or a thin object (Emmorey, 2002), and can convey, for instance, whether a book is on top of the table or below it. Most sign languages (ASL, AUSLAN, etc.) have a class of pointing signs (made with the G handshape in ISL). Deictic or indexing signs are crucial for almost all sign languages; according to Deuchar (1984), deictic signs are possible candidates for grammaticalization in British Sign Language (BSL).

Pointing signs are used as locatives (HERE/THERE), pronouns (I, YOU, HE, SHE, IT, etc.), demonstratives (THIS/THAT), for locating things in the signing space and for ‘naming’ body parts (Johnston, 1989). Pointing cannot be assumed to be a simply transparent gesture; it is a culturally specific phenomenon. Johnston (1989:137) further points out that “in some cultures pointing can be achieved with the lips alone, or the lips and gaze together (Australian aboriginal cultures) and in others, pointing is regularly made with the whole flat hand, not the single extended finger”. Pluralisation can also be achieved through the use of the signing space: by articulating a sign at more than one location, one can pluralise the referent, as with GIRLS in AUSLAN (Johnston, 1989) and also in ShSL. Reduplication, i.e. simple triplication of the sign form in the signing space, is another general way in which pluralisation is marked in IPSL and ISL (Zeshan, 2000; Sinha, 2003).

“Spatial distinctions are also crucial to the system of verb agreement, whereby a transitive verb “agrees with” the location associated with its direct or indirect object ... along with the word order, the spatial modification of verbs is one of the two ways in which sign languages mark the argument structure of verbs. So a sign such as GIVE obligatorily moves toward the spatial location associated with its indirect object” (Meir, 2002:322). Rathmann and Mathur (2002) and Liddell (2000) assert that “the markers of agreement, the loci in space with respect to which agreeing verbs move, are not phonologised. There is not a listable set of locations with which verbs may agree; this is what Rathmann and Mathur call the ‘infinity problem’” (Meier, 2002:325). They further claim that the spatial form of agreement is gestural, not phonological, which implies that verb agreement in sign language has both linguistic and gestural elements.

Whereas verbs in spoken languages are modified to show person and number by adding suffixes to a word stem, sign languages accomplish this through the use of the signing space. Signing space is also important for understanding the types of verbs in sign language. According to Sutton-Spence and Woll (1999:129-151), there are two types of space: topographic space, which recreates a map of the real world (for example, DINING TABLE IS IN A MESS), and syntactic space, which is created from within the language and need not map onto the real world (for example, MY FATHER LOVES MY MOTHER).

“The morphological use of space in ASL can be seen in verbs. In the sentence first-person-give-to-second-person (“I give you”), the hand moves from the space associated with the first person (the signer) to the space associated with the second person. And in the sentence second person- give-to-first-person (“You give me”), the hand moves in the opposite direction. We see the morphological use of space also in what are known as aspectual markers. For example, we can show that someone is giving continually or over and over again by the use of movement and space” (Valli and Lucas, 1992:77).
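
A toy model of this use of space (the locus labels below are arbitrary illustrations, not an inventory from ASL, ISL or ShSL) shows how a directional verb such as GIVE can be described as a path from the locus associated with its subject to the locus associated with its object:

```python
# Toy model of spatial verb agreement: referents are assigned loci in the
# signing space, and a directional verb such as GIVE takes a path from the
# subject's locus to the object's locus. The loci below are illustrative only.

loci = {
    "signer": "centre (first person)",
    "addressee": "front of signer (second person)",
    "FATHER": "right of signer",
    "MOTHER": "left of signer",
}

def directional_verb(verb: str, subject: str, obj: str) -> str:
    """Describe the verb's path through the signing space."""
    return f"{verb}: path from {loci[subject]} to {loci[obj]}"

print(directional_verb("GIVE", "signer", "addressee"))   # "I give you"
print(directional_verb("GIVE", "addressee", "signer"))   # "You give me"
print(directional_verb("GIVE", "FATHER", "MOTHER"))      # third-person referents
```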

From this brief discussion we can see that sign language operates in space, and the term ‘spatial grammar’ is often used in sign language research. Sign language offers an interesting insight into how humans who cannot speak can still develop the ability to communicate in a mode different from ‘normal’ speech. It provides further evidence that only humans have the capacity for ‘language’, and that this language does not depend entirely on sound.

Word/Sign formation

The discussion of the phonological structure of sign language shows that one property it shares with spoken language is duality of patterning: meaningless sub-lexical units (whether of sound or of gesture) combine to form meaningful units, the signs.

Another important feature of human languages is productivity: there are innumerable ways in which humans can produce new words and sentences, each language has a set of rules for how words and sentences may be formed, and a native speaker possesses this knowledge instinctively. Productivity is likewise shared between sign and spoken language (see Meier, 2002). Like spoken languages, sign languages have productive means of expanding their vocabulary through morphological processes such as compounding and derivation, and through morphological modification of basic signs, such as reduplication. The sign language lexicon also includes vocabulary categories such as iconic signs, initialized signs, indexical signs and name signs. There are thousands of lexicalized signs in ISL, ShSL, ASL, etc., representing concepts at various levels of abstraction.

Sign family:

The term sign family has been used by Klima & Bellugi (1979) to describe sets of signs that are related because they share common phonological features and meaning. This is also found in ShSL: for example, the signs THINK, MEMORISE, REMEMBER and UNDERSTAND all occur at the temple/side of the forehead, while ShSL signs articulated on the chest often denote feelings, such as LOVE, LIKE, HAPPY and SAD. Sign families are a common feature across sign languages (Zeshan, 2000).

Compounding:

In spoken languages, the rules for forming compounds differ from one language to another, and the same holds for sign languages. Compounds are formed from combinations such as Noun + Noun, Noun + Verb, Verb + Verb, Adjective + Noun and Adjective + Adjective. Compounding is one of the productive devices for expanding the sign language lexicon; for example, the sign BEAUTIFUL is formed by compounding FACE and GOOD.
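
At the level of glosses, compounding can be sketched as a simple pairing of two signs with a lexicalised result. The entries below are taken from the examples cited in this section, and the lookup-table format is only an illustration; real sign compounds also undergo phonological reduction, which is not modelled here.

```python
# Sketch of compounding at the gloss level, using examples cited in the text
# (FACE + GOOD = BEAUTIFUL; MAN + SIBLING = BROTHER, etc.). The dictionary
# format is illustrative; it does not model phonological reduction.

compounds = {
    ("FACE", "GOOD"): "BEAUTIFUL",
    ("MAN", "SIBLING"): "BROTHER",
    ("WOMAN", "MARRY"): "WIFE",
    ("MAN", "BABY"): "SON",
}

def compound(first: str, second: str) -> str:
    """Return the lexicalised compound, or a transparent two-sign sequence."""
    return compounds.get((first, second), f"{first}^{second}")

print(compound("FACE", "GOOD"))      # BEAUTIFUL
print(compound("MAN", "SIBLING"))    # BROTHER
print(compound("MAN", "MARRY"))      # MAN^MARRY (no lexicalised compound listed)
```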

Derivation

Derivation is a morphological process of making new words/units of a language from existing ones. For example, in ASL the noun CHAIR is derived from the verb SIT (Valli and Lucas, 1992). ISL also makes use of this process, for example in nouns derived from adjectives, such as the sign COLOUR from RED (see Sinha, 2012 for more examples). Similarly, ShSL expands its vocabulary through derivation, for example the sign AUTHOR derived from the verb WRITE (WRITE + CLASSIFIER PERSON).

Reduplication

Reduplication is the repetition of part or all of a word to form a new independent word: “Reduplication stands for repetition of all or a part of lexical items carrying a semantic modification” (Abbi 1985:12). There are different types of reduplication in spoken languages (phonological, morphological and syntactic), and reduplication is used to express pluralisation, iteration and so on. In Khasi, the word ‘kloi’ means hurry; when the word is repeated, as in ‘kloi-kloi’, it means ‘in an abrupt manner’. In sign language, reduplication takes place by repeating the movement of a sign. For example, in ShSL the sign CHILD is reduplicated three times to express pluralisation, and the resulting sign is CHILDREN (transcribed as +++). According to Fischer (1973), one major difference between reduplication in sign language and in spoken language is that in spoken language the word is repeated twice, whereas in sign language the sign is repeated three times.
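
The triplication convention described above can be sketched in gloss notation as follows (the ‘+’ transcription follows the example in the text; the function itself is illustrative only):

```python
# Sketch of reduplication for pluralisation: the movement of the sign is
# repeated (three times in the ShSL example CHILD -> CHILDREN), and each
# repetition is transcribed with a "+" mark, as in the text.

def reduplicate(gloss: str, repetitions: int = 3) -> str:
    """Transcribe a reduplicated sign as the base gloss plus '+' marks."""
    return gloss + "+" * repetitions

print(reduplicate("CHILD"))      # CHILD+++  (glossed as CHILDREN)
print(reduplicate("CHILD", 2))   # CHILD++   (fewer repetitions, for comparison)
```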

Role of Fingerspellings

Fingerspelling is the representation of the letters of the English alphabet on the hands, and it contributes to the process of word formation in sign languages. There are two types: the single-handed fingerspelling of ASL and the double-handed fingerspelling of BSL/ISL. The deaf community in Shillong uses both. Fingerspelling is commonly used when there is no sign for a particular English word. Sutton-Spence (1998:17) points out several functions of fingerspelling, and these functions are found to be similar in ShSL. Fingerspellings:

  1. are often used by the deaf people in communication with the hearing people
  2. are often used by the teachers to introduce new words or new concepts in English
  3. are used in reference to terms in the regional language
  4. are used to denote names of persons and places within the local community
  5. are often used to introduce technical concepts for which there are no sign equivalents
  6. are often used for abbreviations
  7. are used as part of the interpreting task
  8. are used when a signer does not know the sign equivalent that exists throughout the sign community.

Most educational terms in the sign languages of the North East region are signed using fingerspelling whereby the initial letter of the term forms a single sign. Sign language researchers call these initialised signs. Such signs form part of the sign language lexicon, for example COFFEE, and most of them are lexicalised items that behave exactly ‘like a word’.
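
Schematically (in gloss notation only; the sketch does not encode the actual handshapes of the single-handed ASL or double-handed BSL/ISL manual alphabets, and the base sign used in the initialised example is hypothetical), fingerspelling and initialised signs can be represented like this:

```python
# Schematic sketch: a fully fingerspelled word versus an initialised sign.
# Fingerspelling renders an English word letter by letter; an initialised sign
# keeps only the first letter's handshape and combines it with a base sign.
# Gloss notation only; the base sign DRINK below is a hypothetical example.

def fingerspell(word: str) -> str:
    """Gloss a fully fingerspelled word, e.g. 'John' -> 'fs-J-O-H-N'."""
    return "fs-" + "-".join(word.upper())

def initialised_sign(word: str, base_sign: str) -> str:
    """Gloss an initialised sign: first-letter handshape plus a base sign."""
    return f"{word[0].upper()}-handshape + {base_sign}"

print(fingerspell("John"))                   # fs-J-O-H-N
print(initialised_sign("coffee", "DRINK"))   # C-handshape + DRINK (hypothetical)
```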

Lexical borrowing often takes place through contact and convergence between two or more languages, and the resulting items are known as ‘loan signs’. Sign language in the North East shows lexical variation when compared to ISL, and these variations arise through contact with other languages. ShSL uses both the BSL (double-handed) and the ASL (single-handed) fingerspelling systems, and variation in ShSL also emerges through direct and indirect contact with different varieties of ISL. ASL and ISL also have a major influence on the sign language of the North East (NESL) as used in the states of Assam, Mizoram and Nagaland. Signed English is a case of contact between sign language and English, which is why Signed English follows the word order of English. Deaf people also borrow signs from each other within the NE region; for example, the sign NAGALAND was borrowed by deaf students in Meghalaya when they came into contact with students from Nagaland.

Name-signs are a unique cultural feature of the Deaf community, and in ShSL they are commonly built on words borrowed from English. Members of the community know each other by name-signs, which refer to specific individuals. Personal name-signs in ShSL are mostly derived from English names through fingerspelling. A deaf person named John who regularly wears reading glasses, for example, may have the name-sign fs J plus the sign GLASSES; if he is fat he may instead be given fs J plus the sign FAT. Name-signs may also be based on a person’s daily activities, career, skills or involvement with different social groups. Signers use name-signs only when the addressee knows the person being referred to in the conversation.

Demographics of Deafness in the North East Region

The North East (NE) region comprises eight states: Assam, Arunachal Pradesh, Manipur, Meghalaya, Mizoram, Nagaland, Tripura and Sikkim. It is commonly known as the land of the seven sisters with one brother (Sikkim). Sikkim is geographically located near West Bengal, neighbouring Darjeeling, but falls under the administrative jurisdiction of the NE region.

There is no uniform data available across India regarding the population of deaf persons. “In 1970 Taylor and Taylor estimated the Deaf population in India as 2 million (1970). In the 1981 census, the ‘hearing disabled’ of age 5 and above was estimated at 6,315,761. ‘Hearing disabled’ was defined to include those with complete hearing loss to moderate hearing loss. Gopinath (1998) estimated the 1991 Deaf population at 7,770,753 by extrapolating from the 1981 census. Neither the 1991 nor the 2001 census included ‘disabilities’ as a category, so a current estimate must be based on the ratio of Deaf to the total population of India which was estimated to be 1.08 billion” (quoted in Johnson and Johnson, 2008, p.8). Other data reveal that over 25,000 children are born deaf every year across the 3.28 million sq. km of India.

According to the Census of India (2001), the total number of individuals with hearing loss in the NE states was estimated at 78,356. In the 2011 Census data on the disabled population, the total number of persons identified as having ‘hearing loss’ in the NE states is 1,64,280, with Assam having the highest number among the eight states. In view of the size of the deaf population in India, deaf education in the NE states must be understood within the context of the larger hearing community and its struggle to coexist in a pluralistic society. Although they may be smaller in number, Deaf native signers should qualify for the status of a ‘minority’ group.

Sign language in the NE region

The most striking feature of the NE states is their linguistic situation: about 70% of India’s spoken languages are found in this region. As per the 2011 Census, the NE region is home to 122 languages, with Arunachal Pradesh having the highest number at 90. Among these 122 languages, four fall in the category of scheduled languages. English is the official language in Nagaland, Mizoram, Meghalaya and Sikkim, but not in Assam and Tripura; in Assam, Assamese and English are the official languages of the state. Although all the tribal languages have equal status as languages, only 27 of them have found a place in the school curriculum of their respective states. Deaf people co-exist with these varied linguistic communities, which are themselves struggling to empower their own languages and fighting for linguistic survival in a globalised world.

Sign language has received little attention in India, and most research in linguistics and language has focused only on spoken languages. Special emphasis has been given to languages categorised as minor/tribal through government projects and schemes under different nomenclatures, in order to preserve and protect these ‘endangered’ languages. The People’s Linguistic Survey of India (Devy, 2012), whose major focus was on documenting the languages of indigenous and minority communities, has begun to document ISL. However, it does not provide any details regarding the number of sign languages operating in India, and only discusses possible variations of ISL across the country. It can only be assumed that different varieties of sign language exist in the NE states; in this regard, linguistic observation has so far been limited to variation at the lexical level.

In Arunachal Pradesh, homesigns (gestural forms of communication used at home) and a local variety of sign language have emerged among deaf children, despite the influence of speech and the oral method of teaching in schools. In the Deaf Biblical Society, a residential school in Nagaland, American Sign Language (ASL) was introduced by Reverend Waling, the founder of the school, who had learnt the language from an American deaf signer, Bruce Swalbe, at Bengaluru in India. Teachers in the school have also been trained in ASL. Although d/Deaf individuals from Nagaland claim to use ASL alone, there is evidence of their own signs relating to their religion, tradition, culture, food habits and so on.

The state of Mizoram, besides the influence of Indian Sign Language (ISL), also has its own regional variety of signs that are similar to ASL. In fact, the social welfare department of Mizoram has documented the language in a glossary format (Rehabilitation Spastic Society, 2004).

Teachers in the government schools of Tripura have no exposure to any type of sign language, though they claim to use ISL. The children have their own locally devised sign language, introduced by the Deaf children themselves, and this can be found in the Ferrando Speech and Hearing Centre (FSHC), one of the residential schools.

A few teachers from the only government school in Sikkim, the Special School for the Deaf (Social Justice Empowerment), have been trained in basic-level ISL. A deaf teacher (a native signer) also teaches in the school; he was educated in Darjeeling, and his language may therefore be a Kolkata variety. In Manipur, the government school teachers have less exposure to ISL (NERIE-NCERT, 2008) despite having a Deaf teacher in their midst. This Deaf teacher communicates fluently and naturally in sign language with the d/Deaf students in the school, and the effects of this need further investigation.

In Meghalaya, the deaf community consists of a small group of children who are either prelingually or postlingually deaf. The sign language of Meghalaya, ShSL, has emerged from groups of children in special schools; deaf individuals and children had previously remained isolated from each other, and there are no records of a deaf community prior to the development of these schools. The social conditions within which ShSL has emerged are similar to those of Nicaraguan Sign Language (Senghas and Kegl, 1994, quoted in Wallang, 2014 and 2015). These children find a sense of oneness in the residential schools rather than in their homes; this sense of belonging comes from the one behaviour they share, their language, and the school binds them into a unique cultural group.

Sign languages are often used by the Deaf community in settings such as residential schools, deaf associations and deaf clubs. Since the natural sign language used in such places offers constant access for its users, the shared sign language may be considered a ‘heritage language’. Compton (2014) notes that ‘the fulcrum of heritage in this light is a familial tie to the language irrespective of an individual proficiency in that language’. Considering the dominance of oral education, the inordinate focus on English, and the method of ‘total communication’ used in schools, any ‘heritage sign languages’ of the NE states that may exist will be strongly influenced by prominent borrowing from the major sign languages used in metropolitan cities and from ASL (Wallang, 2017).

ShSL draws on three different varieties of ISL: Kolkata, Mumbai and New Delhi. It has emerged from a group of deaf individuals residing in the residential schools (FSHC and SCHH). BSL fingerspelling was initially introduced in these schools, but today the Coimbatore variety is used in the School and Centre for the Hearing Handicapped (SCHH) and the Mumbai variety in FSHC (Wallang, 2015). In the case of Assam, however, the sign language used by deaf signers is largely influenced by the dominant sign languages, the Kolkata and New Delhi varieties of ISL.

Nevertheless, the spoken languages of the NE region have neither influence on, nor any particular relationship to, the signs. For example, in Shillong the Khasi word Jainsem denotes a woman’s cultural attire. Within the signing community, the sign is formed according to how the garment is actually worn, rather than according to its English definition as two pieces of material pinned across a woman’s shoulders. To the Deaf community, the sign JAINSEM also serves as a symbol of the Khasi community.

It is very difficult to determine the nature of the sign languages operating in the deaf communities in different areas of the NE states, or the time of their emergence. “There are no records except for the incidence of the deafness in high iodine deficiency belt across the Himalayas and sub montane regions. The incidence of deafness in the Naga hills of Assam a century ago was reportedly eight times higher than the census average for India, with some villages where every second person [was] either deaf or dumb, or ‘insane’” (Allen, 1905:37, qtd. in Miles, 2001). Compton (2014) notes that the number of speakers or signers of any language is difficult to determine, because one must decide where to draw boundaries between language varieties and, at the same time, decide who counts as a user of the language.

Children residing in the hills and valleys of the interior areas face a serious communication gap with both the hearing village community (dialect speakers) and the deaf community in the urban areas, and thus remain largely isolated. The local languages, English and even sign language are foreign to such a group. There is hardly any access to information because of the difficult terrain of these areas. The majority of these children are not enrolled in schools, and those who are usually drop out before the end of the primary level. Sometimes parents cannot send their children to school because their area does not even have roads connecting it with other villages.

Hence, the topography of the NE states is one of the major hindrances to accessing information for many sections of the ‘disabled’, particularly those living in such interior areas. “Regarding accessibility for persons with other kinds of disabilities such as sign language accessibility for persons with hearing disabilities or braille accessibility for persons with vision disabilities, no information could be identified” (Deepak, 2016, p.44). Community awareness programmes in such areas are therefore needed to promote better access to information and knowledge for such individuals.

Brief Profile of Sign Language in Schools

State | Types of Sign Language | Method of Teaching | Availability of Professional Sign Language Interpreters | Types of Training for Teachers Trained in Sign Language
Assam | ISL (Indian Sign Language) | Total Communication | Yes | Short term
Arunachal Pradesh | Homesigns & gestures | Oral Approach | Nil | Short term
Manipur | Homesigns & gestures | Total Communication | Yes (only 1 school) | Short term
Meghalaya | Meghalaya Sign Language: a variety of ISL | Oral Approach | Yes | Short term
Mizoram | Mizo Sign Language | Total Communication | | Short term
Nagaland | ASL (American Sign Language) | Total Communication, ASL | | Short term
Sikkim | ISL | Total Communication | | Short term
Tripura | ISL | Total Communication | |

References

Abbi, Anvita. "Reduplication in South Asian languages: An Areal, Typological and Historical Perspective." Springer, 2017.

Battison, Robbin. "Phonological rules of American Sign Language." Sign Language Studies, vol. 8, no. 4, 1978, pp. 263-266.

Battison, Robbin. "Sign language phonology." Sign Language Studies, vol. 1, no. 1, 1974, pp. 19-33.

Brentari, Diane. "A prosodic model of sign language phonology." MIT Press, 1998.

Cohen, H., Padden, C., & Perlmutter, D. (1978). A Manual Alphabet Approach to the Teaching of Reading to Deaf Children. The American Annals of the Deaf, 123(5), 500-510.

Emmorey, Karen. "Classifier predicates in American Sign Language." The elements: A para session on linguistic units and levels, 2002, pp. 287-309.

Johnston, Trevor. "Auslan: The sign language of the Australian deaf community." Cambridge University Press, 1989.

Kendon, A. (1985). Gesture and speech: How they interact. Annual Review of Anthropology, 15(1), 283-309.

Kendon, A. (2002). Gesture: Visible Action as Utterance. Cambridge University Press.

Klima, Edward S., and Ursula Bellugi. "The Signs of Language." Harvard University Press, 1979.

Kyle, Jim, and Bencie Woll. "Sign language phonology." Linguistics, vol. 18, no. 2, 1980, pp. 201-260.

Kyle, Jim, and Bencie Woll. "Sign language phonology: The British Sign Language Perspective." Linguistics, vol. 23, no. 2, 1985, pp. 189-229.

Liddell, Scott K., and Robert E. Johnson. "American Sign Language: The phonological base." Sign Language Studies, vol. 64, no. 1, 1989, pp. 197-225.

Maher, J. (1996). Seeing Language in Sign: The Work of William C. Stokoe. Washington, D.C.: Gallaudet University Press.

Meier, R. P., Cormier, K., & Quinto-Pozos, D. (2002). Modality and structure in signed and spoken languages. Cambridge University Press.

NERIE-NCERT (2015). A Study on Issues & Challenges of Hearing Impaired and Deaf Students in Learning English. PAC Research, NERIE-NCERT.

NERIE-NCERT (2022-24). Research cum Documentation of Sign Language for Teaching of English, Maths and Social Sciences in the NE Region. PAC Research, NERIE-NCERT.

Petitto, L. A. (1994). On the autonomy of language and gesture: Evidence from the acquisition of personal pronouns in American Sign Language. Cognition, 50(1-3), 1-46.

Pfau, R., & Steinbach, M. (2006). Sign language: An international handbook. De Gruyter Mouton.

Stokoe, William C. (1960). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. Studies in Linguistics, Occasional Papers 8.

Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg (1965). A Dictionary of American Sign Language on Linguistic Principles. Gallaudet College Press.

Sinha, S. (2017). Indian Sign Language: An Analysis of Its Grammar. Gallaudet University Press.

Supalla, Ted. "The classifier system in American Sign Language." University of Chicago Press, 1995.

Sutton-Spence, Rachel, and Bencie Woll. "The linguistics of British Sign Language: An introduction." Cambridge University Press, 1999.

Valli, Clayton, and Ceil Lucas. "Linguistics of American Sign Language: An introduction." Gallaudet University Press, 1992.

Wallang, Melissa G. (2007). Sign Linguistics and Language Education for the Deaf: An Overview of North-East Region. Academic Excellence. ISBN 81-89901-20-2.

Wallang, Melissa G. An Introduction to Sign Language: A Visual Dictionary. Lakshi Publication, New Delhi. ISBN 978-93-8212054-4.

Wallang, M. G. (2022). Linguistic Accessibility: An Approach to Deaf Education. Journal of Contemporary Concerns and Challenges in Education. NERIE-NCERT, Offset Panorama Printing Press, Shillong.

Wallang, M. G. (2022). Sign Language in Multilingual Context: The Dilemma of Deaf People in School Education. In NEP 2020: Indian Languages, Arts & Culture, edited by Saryug Yadav and Ram Niwas. ISBN 978-81-956411-54.

Wallang, M. G. (2019). Sustaining Digital Language Resources and Sign Language, Indian Journal of Educational Technology. CIET, NCERT.

Wilcox, S. (2004). Gesture and Sign Language in Human-Computer Interaction. Gesture-Based Communication in Human-Computer Interaction, 118-131.

Zeshan, Ulrike. "Sign languages of the world: A comparative handbook." De Gruyter Mouton, 2010.