It is commonly assumed by non-linguists that all languages have vocabulary systems in which the words themselves differ in sound-form but refer to reality in the same way. From this assumption it follows that for every word in the mother tongue there is an exact equivalent in the foreign language. This belief is reinforced by small bilingual dictionaries, where single-word translations are often offered. Language learning, however, cannot be just a matter of learning to substitute a new set of labels for the familiar ones of the mother tongue.
Firstly, it should be borne in mind that though objective reality exists outside human beings and irrespective of the language they speak, every language classifies reality in its own way by means of vocabulary units. In English, e.g., the word foot is used to denote the extremity of the leg. In Russian there is no exact equivalent for foot: the word нога denotes the whole leg, including the foot.
1 See, e. g., Ch. Fries. Teaching and Learning English as a Foreign Language. University of Michigan Press, 1963, p. 9.
Classification of the real world around us provided by the vocabulary units of our mother tongue is learned and assimilated together with our first language. Because we are used to the way in which our own language structures experience, we are often inclined to think of this as the only natural way of handling things, whereas in fact it is highly arbitrary. One example is provided by the words watch and clock. It would seem natural for Russian speakers to have a single word to refer to all devices that tell us what time it is; yet in English they are divided into two semantic classes depending on whether or not they are customarily portable. We also find it natural that kinship terms should reflect the difference between male and female: brother or sister, father or mother, uncle or aunt, etc., yet in English this distinction is not made in the case of cousin (cf. the Russian двоюродный брат, двоюродная сестра). Contrastive analysis also brings to light what can be labelled problem pairs, i.e. cases where one word in one language corresponds to two different words in another language.
Compare, for example, часы in Russian and clock, watch in English; художник in Russian and artist, painter in English.
Each language contains words which cannot be translated directly from it into another language. Favourite examples of untranslatable German words are gemütlich (something like ‘easygoing’, ‘humbly pleasant’, ‘informal’) and Schadenfreude (‘pleasure over the fact that someone else has suffered a misfortune’). Traditional examples of untranslatable English words are sophisticated and efficient.
This is not to say that the lack of word-for-word equivalents implies also the lack of what is denoted by these words. If this were true, we would have to conclude that speakers of English never indulge in Schadenfreude and that there are no sophisticated Germans, or that there is no efficient industry in any country outside England or the USA.
If we abandon the primitive notion of word-for-word equivalence, we can safely assume, firstly, that anything which can be said in one language can be translated more or less accurately into another and, secondly, that correlated polysemantic words of different languages are as a rule not co-extensive. Polysemantic words in all languages may denote very different types of objects, and yet all the meanings are considered by the native speakers to be obviously logical extensions of the basic meaning. For example, to an Englishman it is self-evident that one should be able to use the word head to denote the following:
head: of a person, of a bed, of a coin, of a cane, of a match, of a table, of an organisation,
whereas in Russian different words have to be used: голова, изголовье, сторона, головка, etc.
The very real danger for the Russian learner of English here is that, having learned first that head is the English word which denotes a part of the body, he will assume that it can be used in all the cases where the Russian word голова is used in Russian, e.g. голова сахара (‘a loaf of sugar’), городской голова (‘mayor of the city’), он парень с головой (‘he is a bright lad’), в первую голову (‘in the first place’), погрузиться во что-л. с головой (‘to throw oneself into smth.’), etc., but will never think of using the word head in connection with ‘a bed’ or ‘a coin’. Thirdly, the meaning of any word depends to a great extent on the place it occupies in the set of semantically related words: its synonyms, the constituents of the lexical field the word belongs to, other members of the word-family which the word enters, etc.
Thus, e.g., in the English synonymic set brave, courageous, bold, fearless, audacious, valiant, valorous, doughty, undaunted, intrepid each word differs in a certain component of meaning from the others: brave usually implies resolution and self-control in meeting without flinching a situation that inspires fear; courageous stresses stout-heartedness and firmness of temper; bold implies either a temperamental liking for danger or a willingness to court danger or to dare the unknown, etc. Comparing the corresponding Russian synonymic set храбрый, бесстрашный, смелый, мужественный, отважный, etc., we see that the Russian word смелый, e.g., may be considered as a correlated word to either brave, valiant or valorous, and also that no member of the Russian synonymic set can be viewed as an exact equivalent of any single member of the English synonymic set in isolation, although all of them denote ‘having or showing fearlessness in meeting that which is dangerous, difficult, or unknown’. Different aspects of this quality are differently distributed among the words making up the synonymic set.
This absence of one-to-one correspondence can also be observed if we compare the constituents of the same lexico-semantic group in different languages. Thus, for example, let us assume that an Englishman has in his vocabulary the following words for evaluating mental aptitude: apt, bright, brilliant, clever, cunning, intelligent, shrewd, sly, dull, stupid, slow, foolish, silly. Each of these words has a definite meaning for him, therefore each word actually represents a value judgement. As the Englishman sees a display of mental aptitude, he attaches one of these words to the situation and, in so doing, he attaches a value judgement. The corresponding Russian semantic field of mental aptitude is different (cf. способный, хитрый, умный, глупый, тупой, etc.), therefore the meaning of each word is slightly different too. What Russian speakers would describe as хитрый might be described by English speakers as either cunning or sly, depending on how they evaluate the given situation.
The problem under discussion may also be illustrated by the analysis of the members of correlated word-families, e.g. cf. голова, головка, etc. and head, heady, etc., which are differently connected with the main word of the family in each of the two languages and have different denotational and connotational components of meaning. This can be easily observed in words containing diminutive and endearing suffixes: the English words head, grandfather, girl and others do not possess the connotative component which is part of the meaning of the Russian words головка, головушка, головёнка, дедушка, дедуля, etc.
Thus, on the lexical level, or to be more exact on the level of lexical meaning, contrastive analysis reveals that correlated polysemantic words are not co-extensive and shows the teacher where to expect an unusual degree of learning difficulty. This analysis may also point out effective ways of overcoming the anticipated difficulty, as it shows which of the new items will require a more extended and careful presentation and practice.
Difference in the lexical meaning (or meanings) of correlated words accounts for the difference in their collocability in different languages. This is of particular importance in developing speech habits, as the mastery of collocations is much more important than the knowledge of isolated words.
Thus, e.g., the English adjective new and the Russian adjective новый when taken in isolation are felt as correlated words as in a number of cases new stands for новый, e.g. новое платье — a new dress, Новый Год — New Year. In collocation with other nouns, however, the Russian adjective cannot be used in the same meaning in which the English word new is currently used. Compare, e.g., new potatoes — молодая картошка, new bread — свежий хлеб, etc.
The lack of co-extension may be observed in collocations made up by words belonging to different parts of speech, e.g. compare word-groups with the verb to fill:
to fill a lamp — заправлять лампу
to fill a truck — загружать машину
to fill a pipe — набивать трубку
to fill a gap — заполнять пробел
As we see, the verb to fill in different collocations corresponds to a number of different verbs in Russian. Conversely, one Russian word may correspond to a number of English words.
For instance, compare:
тонкая книга — a thin book
тонкая ирония — subtle irony
тонкая талия — slim waist
Perhaps the greatest difficulty for the Russian learners of English is the fact that not only notional words but also function words in different languages are polysemantic and not co-extensive. Quite a number of mistakes made by the Russian learners can be accounted for by the divergence in the semantic structure of function words. Compare, for example, the meanings of the Russian preposition до and its equivalents in the English language.
(Он работал) до 5 часов — till 5 o'clock
(Это было) до войны — before the war
(Он дошел) до угла — to the corner
Contrastive analysis on the level of the grammatical meaning reveals that correlated words in different languages may differ in the grammatical component of their meaning.
To take a simple instance, Russians are liable to say *the news are good, *the money are on the table, *her hair are black, etc., as the words новости, деньги, волосы have the grammatical meaning of plurality in the Russian language.
Of particular interest in contrastive analysis are the compulsory grammatical categories which foreign language learners may find in the language they are studying and which are different from, or non-existent in, their mother tongue. These are the meanings which the grammar of the language “forces” us to signal whether we want to or not.
One of the compulsory grammatical categories in English is the category of definiteness/indefiniteness. We know that English signals this category by means of the articles. Compare the meaning of the word man in the man is honest and man is honest.
As this category is non-existent in the Russian language it is obvious that Russian learners find it hard to use the articles properly.
Contrastive analysis brings to light the essence of what is usually described as idiomatic English, idiomatic Russian etc., i.e. the peculiar way in which every language combines and structures in lexical units various concepts to denote extra-linguistic reality.
The outstanding Russian linguist Academician L. V. Ščerba repeatedly stressed the fact that it is an error in principle to suppose that the notional systems of any two languages are identical. Even in those areas where the two cultures overlap and where the material extralinguistic world is identical, the lexical units of the two languages are not different labels appended to identical concepts. In the overwhelming majority of cases the concepts denoted are differently organised by verbal means in the two languages. Different verbal organisation of concepts in different languages may be observed not only in the difference of the semantic structure of correlated words but also in the structural difference of word-groups commonly used to denote identical entities.
For example, a typical Russian word-group used to describe the way somebody performs an action, or the state in which a person finds himself, has a structure that may be represented by the formula adverb followed by a finite form of a verb (or a verb + an adverb), e.g. он крепко спит, он быстро /медленно/ усваивает, etc. In English we can also use structurally similar word-groups and say he smokes a lot, he learns slowly (fast), etc. The structure of idiomatic English word-groups, however, is different. The formula of this word-group can be represented as an adjective + a deverbal noun, e.g. he is a heavy smoker, a poor learner, cf. “the Englishman is a slow starter but there is no stronger finisher” (Galsworthy). Another English word-group used in similar cases has the structure verb to be + adjective + the infinitive, e.g. (He) is quick to realise, (He) is slow to cool down, etc., which is practically non-existent in the Russian language. Commonly used English words of the type (he is) an early-riser, a music-lover, etc. have no counterparts in the Russian language and as a rule correspond to phrases of the type (Он) рано встает, (он) очень любит музыку, etc.1
1 See ‘Word-Formation’, § 34, p. 151.
Last but not least, contrastive analysis deals with the meaning and use of situational verbal units, i.e. words, word-groups, sentences which are commonly used by native speakers in certain situations.
For instance, when we answer a telephone call and hear somebody asking for a person whose name we have never heard, the usual answer for the Russian speaker would be Вы ошиблись (номером), Вы не туда попали. The Englishman in an identical situation is likely to say Wrong number. When somebody apologises for inadvertently pushing you or treading on your foot and says Простите! (I beg your pardon. Excuse me.), the Russian speaker in reply to the apology would probably say Ничего, пожалуйста, whereas the verbal reaction of an Englishman would be different: It’s all right. It does not matter. *Nothing or *please in this case cannot be viewed as words correlated with Ничего, Пожалуйста.
To sum up, the value of contrastive analysis can hardly be overestimated: it is an indispensable stage in the preparation of teaching material, in selecting lexical items to be extensively practised and in predicting typical errors. It is also of great value for an efficient teacher who knows that to have a native-like command of a foreign language, to be able to speak what we call idiomatic English, words, word-groups and whole sentences must be learned within the lexical, grammatical and situational restrictions of the English language.
§ 2. Statistical Analysis
An important and promising trend in modern linguistics which has been making progress during the last few decades is the quantitative study of language phenomena and the application of statistical methods in linguistic analysis.
Statistical linguistics is nowadays generally recognised as one of the major branches of linguistics. Statistical inquiries have considerable importance not only because of their precision but also because of their relevance to certain problems of communication engineering and information theory.
Probably one of the most important developments in modern linguistics was the realisation that non-formalised statements are, as a matter of fact, unverifiable, whereas any scientific method of cognition presupposes verification of the data obtained. The value of statistical methods as a means of verification is beyond dispute.
Though statistical linguistics has a wide field of application, here we shall discuss mainly the statistical approach to vocabulary.
The statistical approach has proved essential in the selection of vocabulary items of a foreign language for teaching purposes.
It is common knowledge that very few people know more than 10% of the words of their mother tongue. It follows that if we do not wish to waste time on committing to memory vocabulary items which are never likely to be useful to the learner, we have to select only lexical units that are commonly used by native speakers. Out of about 500,000 words listed in the OED, the “passive” vocabulary of an educated Englishman comprises no more than 30,000 words, and of these 4,000 — 5,000 are presumed to be amply sufficient for the daily needs of an average member of the English speech community. Thus it is evident that the problem of selection of teaching vocabulary is of vital importance.1 It is also evident that by far the most reliable single criterion is that of frequency, as presumably the most useful items are those that occur most frequently in our language use.
As far back as 1927, recognising the need for information on word frequency for sound teaching materials, E. L. Thorndike brought out a list of the 10,000 words occurring most frequently in a corpus of five million running words from forty-one different sources. In 1944 the list was extended to 30,000 words.2
Statistical techniques have been successfully applied in the analysis of various linguistic phenomena: different structural types of words, affixes, the vocabularies of great writers and poets and even in the study of some problems of historical lexicology.
Statistical regularities, however, can be observed only if the phenomena under analysis are sufficiently numerous and their occurrence very frequent. Thus the first requirement of any statistical investigation is the evaluation of the size of the sample necessary for the analysis.
To illustrate this statement we may consider the frequency of word occurrences.
It is common knowledge that a comparatively small group of words makes up the bulk of any text.3 It was found that approximately 1,300 — 1,500 most frequent words make up 85% of all words occurring in the text. If, however, we analyse a sample of 60 words, it is hard to predict the number of occurrences of the most frequent words. As the sample is so small, it may contain comparatively very few or very many of such words. The size of the sample sufficient for reliable information as to the frequency of the items under analysis is determined by mathematical statistics by means of certain formulas.
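The claim about coverage can be checked directly on any corpus: count the words, take the most frequent items, and see what share of the running words they account for. Below is a minimal sketch of such a count, assuming a plain text file read as a flat list of words; the file name corpus.txt and the 1,500-word cut-off are illustrative choices, not taken from the studies cited here.

```python
from collections import Counter

def coverage(tokens, top_n=1500):
    """Share of running words covered by the top_n most frequent items."""
    counts = Counter(tokens)
    total = sum(counts.values())
    covered = sum(freq for _, freq in counts.most_common(top_n))
    return covered / total

# Hypothetical usage: a corpus read as a flat list of lower-cased words.
with open("corpus.txt", encoding="utf-8") as f:
    tokens = f.read().lower().split()

print(f"Top 1,500 items cover {coverage(tokens):.0%} of running words")
```

On a sample of millions of running words the figure approaches the 85% quoted above; on a 60-word sample it fluctuates widely, which is exactly the point made above about sample size.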
1 See ‘Various Aspects ...’, § 14, p. 197; ‘Fundamentals of English Lexicography’, § 6, p. 216.
2 The Teacher’s Word Book of 30,000 Words by Edward L. Thorndike and Irvin Lorge. N. Y., 1963. See also M. West. A General Service List of English Words. L., 1959, pp. V-VI.
3 See ‘Various Aspects ...’, § 14, p. 197.
It goes without saying that to be useful in teaching, statistics should deal with meanings as well as sound-forms, as not all word-meanings are equally frequent. Besides, the number of meanings exceeds by far the number of words. The total number of different meanings recorded and illustrated in the OED for the first 500 words of the Thorndike Word List is 14,070; for the first thousand it is nearly 25,000. Naturally not all the meanings should be included in the list of the first two thousand most commonly used words. Statistical analysis of meaning frequencies resulted in the compilation of A General Service List of English Words with Semantic Frequencies. The semantic count is a count of the frequency of the occurrence of the various senses of the 2,000 most frequent words as found in a study of five million running words. The semantic count is based on the differentiation of the meanings in the OED, and the frequencies are expressed as percentages, so that the teacher and textbook writer may find it easier to understand and use the list. An example will make the procedure clear.
room (’space’): takes less room, not enough room to turn round (in), make room for, (figurative) room for improvement — 12%
room (’part of a house’): come to my room, bedroom, sitting room, drawing room, bathroom — 83%
room (plural = suite, lodgings): my room in college, to let rooms — 2%
It can be easily observed from the semantic count above that the meaning ‘part of a house’ (sitting room, drawing room, etc.) makes up 83% of all occurrences of the word room and should be included in the list of meanings to be learned by the beginners, whereas the meaning ‘suite, lodgings’ is not essential and makes up only 2% of all occurrences of this word.
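The arithmetic behind such a count is straightforward: each sense's share is its number of tagged occurrences divided by all occurrences of the word. A minimal sketch, with occurrence figures for room invented purely so that the percentages come out close to those quoted above:

```python
def sense_percentages(sense_counts):
    """Express the frequency of each sense as a percentage of all occurrences of the word."""
    total = sum(sense_counts.values())
    return {sense: round(100 * n / total) for sense, n in sense_counts.items()}

# Invented occurrence counts for the senses of 'room' (illustration only).
room = {"space": 120, "part of a house": 830, "suite, lodgings": 20, "other uses": 30}
print(sense_percentages(room))
# {'space': 12, 'part of a house': 83, 'suite, lodgings': 2, 'other uses': 3}
```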
Statistical methods have also been applied to various theoretical problems of meaning. An interesting attempt was made by G. K. Zipf to study the relation between polysemy and word frequency by statistical methods. Having discovered that there is a direct relationship between the number of different meanings of a word and its relative frequency of occurrence, Zipf proceeded to find a mathematical formula for this correlation. He came to the conclusion that the number of different meanings of a word will tend to be equal to the square root of its relative frequency (with the possible exception of the few dozen most frequent words). This was summed up in the formula m = √F, where m stands for the number of meanings and F for relative frequency. This formula is known as Zipf’s law.
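Read as a proportion, the law says that the number of meanings grows with the square root of frequency: a word a hundred times as frequent as another is predicted to have about ten times as many meanings. A toy illustration of this ratio reading, with invented frequency figures:

```python
import math

def predicted_meaning_ratio(freq_a, freq_b):
    """Under m = sqrt(F), the ratio of meanings equals the square root of the frequency ratio."""
    return math.sqrt(freq_a / freq_b)

# Hypothetical relative frequencies: word A is 100 times as frequent as word B.
print(predicted_meaning_ratio(0.01, 0.0001))  # 10.0 -> about ten times as many meanings
```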
Though numerous corrections to this law have been suggested, still there is no reason to doubt the principle itself, namely, that the more frequent a word is, the more meanings it is likely to have.
One of the most promising trends in statistical enquiries is the analysis of collocability of words. It is observed that words are joined together according to certain rules. The linguistic structure of any string of words may be described as a network of grammatical and lexical restrictions.1
The set of lexical restrictions is very complex. On the standard probability scale the possibilities of combination of lexical units range from zero (impossibility) to one (certainty).
Of considerable significance in this respect is the fact that a high frequency value of individual lexical items does not forecast a high frequency of the word-group formed by these items. Thus, e.g., the adjective able and the noun man are both included in the list of 2,000 most frequent words; the word-group an able man, however, is very rarely used.
1 See ‘Word-Groups and Phraseological Units’, §§ 1, 2, pp. 64, 66, 244.
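One way of making this observation precise is to compare the observed frequency of a word-group with the frequency it would have if its members combined purely by chance. The sketch below is a generic illustration of that comparison; all the counts are invented and are not taken from any published word list.

```python
def expected_pair_count(count_a, count_b, total_tokens):
    """Frequency the pair A B would have if words combined purely by chance."""
    return count_a * count_b / total_tokens

# Invented counts in a hypothetical corpus of five million running words.
total = 5_000_000
able, man = 2_000, 10_000          # both items frequent enough for a 2,000-word list
observed_able_man = 3              # yet the word-group itself is rare

expected = expected_pair_count(able, man, total)
print(f"expected by chance: {expected:.1f}, observed: {observed_able_man}")
# High frequency of the individual items does not forecast a high frequency of the group.
```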
The importance of frequency analysis of word-groups is indisputable, as in speech we actually deal not with isolated words but with word-groups. Recently attempts have been made to elucidate this problem in different languages, both in theoretical and in applied lexicology and lexicography.
It should be pointed out, however, that the statistical study of vocabulary has some inherent limitations.
Firstly, the statistical approach is purely quantitative, whereas most linguistic problems are essentially qualitative. To put it in simpler terms, quantitative research implies that one knows what to count, and this knowledge is reached only through a long period of qualitative research carried on upon the basis of certain theoretical assumptions.
For example, even simple numerical word counts presuppose a qualitative definition of the lexical items to be counted. In connection with this different questions may arise, e.g. is the orthographical unit work to be considered as one word or as two different words: work n — (to) work v? Are all word-groups to be viewed as consisting of so many words, or are some of them to be counted as single, self-contained lexical units? We know that in some dictionaries word-groups of the type by chance, at large, in the long run, etc. are counted as one item though they consist of at least two words; in others they are not counted at all but viewed as peculiar cases of usage of the notional words chance, large, run, etc. Naturally the results of the word counts largely depend on the basic theoretical assumption, i.e. on the definition of the lexical item.1
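How much these decisions matter can be seen even on a toy example: the count changes according to whether a phrase like by chance is taken as one item and whether work (noun) and (to) work (verb) are taken as one word or two. A minimal sketch, with an invented sample sentence:

```python
text = "by chance the work did not work out"

# Every orthographic word counted separately.
running_words = text.split()
print(len(running_words))                      # 8

# Decision 1: treat the word-group 'by chance' as a single self-contained item.
units = text.replace("by chance", "by_chance").split()
print(len(units))                              # 7

# Decision 2: is 'work' one lexical item or two (work n and (to) work v)?
distinct_by_spelling = len(set(units))                   # 'work' counted once -> 6 items
distinct_by_part_of_speech = distinct_by_spelling + 1    # 'work' counted twice -> 7 items
print(distinct_by_spelling, distinct_by_part_of_speech)
```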
We also need a qualitative description of the language in deciding whether we deal with one item or more than one, e.g. in sorting out two homonymous words from different meanings of one word.2 It follows that before counting homonyms one must have a clear idea of what difference in meaning is indicative of homonymy. From the discussion of the linguistic problems above we may conclude that an exact and exhaustive definition of the linguistic qualitative aspects of the items under consideration must precede the statistical analysis.
Secondly, we must admit that not all linguists have the mathematical equipment necessary for applying statistical methods. In fact what is often referred to as statistical analysis is purely numerical counts of this or that linguistic phenomenon not involving the use of any mathematical formula, which in some cases may be misleading.
Thus, statistical analysis is applied in different branches of linguistics, including lexicology, as a means of verification and as a reliable criterion for the selection of the language data, provided a qualitative description of lexical items is available.
1 See also ‘Various Aspects ...’, § 12, p. 195.
2 See ‘Semasiology’, §§ 37, 38, pp. 43, 44.

§ 3. Immediate Constituents Analysis

The theory of Immediate Constituents (IC) was originally elaborated as an attempt to determine the ways in which lexical units are relevantly related to one another. It was discovered that combinations of such units are usually structured into hierarchically arranged sets of binary constructions. For example, in the word-group a black dress in severe style we do not relate a to black, black to dress, dress to in, etc., but set up a structure which may be represented as a black dress / in severe style. Thus the fundamental aim of IC analysis is to segment a set of lexical units into two maximally independent sequences, or ICs, thus revealing the hierarchical structure of this set. Successive segmentation results in Ultimate Constituents (UC), i.e. two-facet units that cannot be segmented into smaller units having both sound-form and meaning. The Ultimate Constituents of the word-group analysed above are: a | black | dress | in | severe | style.
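The binary, hierarchical character of the segmentation can be made explicit by writing each IC division as a nested pair and flattening the nesting to obtain the Ultimate Constituents. A minimal sketch using the word-group analysed above; the lower-level splits inside each IC are one plausible segmentation, added here for illustration only:

```python
# Each step of IC analysis divides a construction into exactly two constituents,
# so the whole analysis can be written as nested pairs of strings.
ic_tree = (("a", ("black", "dress")), ("in", ("severe", "style")))

def ultimate_constituents(node):
    """Flatten a binary IC tree into its Ultimate Constituents."""
    if isinstance(node, str):
        return [node]
    left, right = node
    return ultimate_constituents(left) + ultimate_constituents(right)

print(" | ".join(ultimate_constituents(ic_tree)))
# a | black | dress | in | severe | style
```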
The meaning of the sentence, word-group, etc. and the IC binary segmentation are interdependent. For example, fat major’s wife may mean either that ‘the major is fat’ or that ‘his wife is fat’. The former semantic interpretation presupposes the IC analysis into fat major’s | wife, whereas the latter reflects a different segmentation into ICs, namely fat | major’s wife.
It must be admitted that this kind of analysis is arrived at by reference to intuition and it should be regarded as an attempt to formalise one’s semantic intuition.
It is mainly to discover the derivational structure of words that IC analysis is used in lexicological investigations. For example, the verb denationalise has both a prefix de- and a suffix -ise (-ize). To decide whether this word is a prefixal or a suffixal derivative we must apply IC analysis.1 The binary segmentation of the string of morphemes making up the word shows that *denation or *denational cannot be considered independent sequences, as there is no direct link between the prefix de- and nation or national. In fact no such sound-forms function as independent units in modern English. The only possible binary segmentation is de | nationalise, therefore we may conclude that the word is a prefixal derivative. There are also numerous cases when the identical morphemic structure of different words is insufficient proof of the identical pattern of their derivative structure, which can be revealed only by IC analysis. Thus, comparing, e.g., snow-covered and blue-eyed we observe that both words contain two root-morphemes and one derivational morpheme. IC analysis, however, shows that whereas snow-covered may be treated as a compound consisting of two stems snow + covered, blue-eyed is a suffixal derivative, as the underlying structure shown by IC analysis is different, i.e. (blue + eye) + -ed.