Relativism

Bob Ritter [Henry Czerny]:
You are such a Boy Scout! You see everything in black and white!

Jack Ryan [Harrison Ford]:
No, no, no! Not black and white, Ritter, right and wrong!

Clear and Present Danger [Paramount Pictures, 1994]

The inescapable conclusion is that subjectivity, relativity and irrationalism are advocated [by Richard Rorty] not in order to let in all opinions, but precisely so as to exclude the opinions of people who believe in old authorities and objective truths. This is the short cut to [Antonio] Gramsci's new cultural hegemony:  not to vindicate the new culture against the old, but to show that there are no grounds for either, so that nothing remains save political commitment.

Thus, almost all those who espouse the relativistic 'methods' introduced into the humanities by Foucault, Derrida and Rorty are vehement adherents to a code of political correctness that condemns deviation in absolute and intransigent terms. The relativistic theory exists in order to support an absolutist doctrine. We should not be surprised therefore at the extreme disarray that entered the camp of deconstruction, when it was discovered that one of the leading ecclesiastics, Paul de Man, once had Nazi sympathies. It is manifestly absurd to suggest that a similar disarray would have attended the discovery that Paul de Man had once been a communist -- even if he had taken part in some of the great communist crimes. In such a case he would have enjoyed the same compassionate endorsement as was afforded to [György] Lukács, [Maurice] Merleau-Ponty and Sartre.

Roger Scruton, Fools, Frauds and Firebrands, Thinkers of the New Left, Bloomsbury, 2015, pp.236-237; boldface added.


Sharp fluctuations of moral absolutism and moral relativism are also among the attitudes of intellectuals revealed in this study. The moral absolutism is reserved for the stern judgments of their own society, while a pragmatic moral relativism appears when they give the benefit of the doubt to certain dictators and their political systems as long as they find them fundamentally praiseworthy and well intentioned. It follows that the centrality and consistent use of the critical faculties of intellectuals has often been overestimated.

Paul Hollander, From Benito Mussolini to Hugo Chavez, Intellectuals and a Century of Political Hero Worship, Cambridge University Press, 2016, p.14.


But the philosophy that killed off truth proclaims unlimited tolerance for the 'language games' (i.e., opinions, beliefs and doctrines) that people find useful. The outcome is expressed in the words of Karl Kraus:  'Alles ist wahr und auch das Gegenteil.' 'Everything is true, and also its opposite.'

Leszek Kołakowski (1927-2009), "Our Merry Apocalypse," 1997, Is God Happy? Selected Essays, Basic Books, 2013, p.318.


φησὶ γάρ που πάντων χρημάτων μέτρον ἄνθρωπον εἶναι,
τῶν μὲν ὄντων, ὡς ἔστι, τῶν δὲ μὴ ὄντων, ὡς οὐκ ἔστιν·

He says somewhere that man is the measure of all things,
of the existing, that they are, and of the non-existing, that they are not.

Socrates quoting Protagoras of Abdera, Plato, Theaetetus, 152a, Theaetetus and Sophist, translated by Harold North Fowler, Loeb Classical Library, Harvard University Press, 1921, 1961, pp.40-41, translation modified.

The first clear statement of relativism comes with the Sophist Protagoras of Abdera, Πρωταγόρας ὁ Ἀβδηρίτης, as quoted by Plato:

ὡς οἷα μὲν ἕκαστα ἐμοὶ φαίνεται, τοιαῦτα μὲν ἔστιν ἐμοί, οἷα δὲ σοί, τοιαῦτα δὲ αὖ σοί.

The way things appear to me, in that way they exist for me; and the way things appear to you, in that way they exist for you. [Theaetetus 152a]

Thus, however I see things, that is actually true -- for me. If you see things differently, then that is true -- for you. There is no separate or objective truth apart from how each individual happens to see things. Consequently, Protagoras says that there is no such thing as falsehood. Unfortunately, this would make Protagoras's own profession meaningless, since his business is to teach people how to persuade others of their own beliefs. It would be strange to tell others that what they believe is true but that they should accept what you say nevertheless. So Protagoras qualified his doctrine: while whatever anyone believes is true, things that some people believe may be better than what others believe.

Plato thought that such a qualification reveals the inconsistency of the whole doctrine. His basic argument against relativism is called the "Turning the Tables" (Peritropé, "turning around") argument, and it goes something like this: "If the way things appear to me, in that way they exist for me, and the way things appear to you, in that way they exist for you, then it appears to me that your whole doctrine is false." Since anything that appears to me is true, then it must be true that Protagoras is wrong [note]. Relativism thus has the strange logical property of not being able to deny the truth of its own contradiction. Indeed, if Protagoras says that there is no falsehood, then he cannot say that the opposite, the contradiction, of his own doctrine is false. Protagoras wants to have it both ways -- that there is no falsehood but that the denial of what he says is false -- and that is typical of relativism. And if we say that relativism simply means that whatever I believe is nobody else's business, then there is no reason why I should tell anybody else what I believe, since it is then none of my business to influence their beliefs.
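
The structure of the Peritropé can be put schematically. The notation below is only an illustrative sketch, not anything in Plato's text: let B_x(p) say that person x believes p, and T_x(p) that p is true for x. Then Protagoras's principle P and its own undoing look like this:

    P:\quad \forall p\, \forall x\, \bigl(B_x(p) \rightarrow T_x(p)\bigr) \qquad \text{(whatever anyone believes is true for him)}
    B_{\mathrm{me}}(\neg P) \qquad \text{(it appears to me that the principle is false)}
    \therefore\ T_{\mathrm{me}}(\neg P) \qquad \text{(so, by } P \text{ itself, the principle is false -- for me)}

Protagoras cannot block the last step without abandoning the very principle that licenses it.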

So then, why bother even stating relativism if it cannot be used to deny opposing views? Protagoras's own way out, that his view must be "better," doesn't make any sense either: better than what? Better than opposing views? But there are no opposing views, by relativism's own principle. And even if we can identify opposing views -- taking contradiction and falsehood seriously -- what is "better" supposed to mean? Saying that one thing is "better" than another is always going to involve some claim about what is actually good, desirable, worthy, beneficial, etc. What is "better" is supposed to produce more of what is good, desirable, worthy, beneficial, etc.; but no such claims make any sense unless it is claimed that the views expressed about what is actually good, desirable, worthy, beneficial, etc. are true. If the claims about value are not supposed to be true, then it makes no difference what the claims are: they cannot exclude their opposites.

It is characteristic of all forms of relativism that they wish to preserve for themselves the very principles that they seek to deny to others. Thus, relativism basically presents itself as a true doctrine, which means that it will logically exclude its opposites (absolutism or objectivism), but what it actually says is that no doctrines can logically exclude their opposites. It wants for itself the very thing (objectivity) that it denies exists. Logically this is called "self-referential inconsistency," which means that you are inconsistent when it comes to considering what you are actually doing yourself. More familiarly, that is called wanting to "have your cake and eat it too." Someone who advocates relativism, then, may just have a problem recognizing how their doctrine applies to themselves.
"Here again we see the contrast
between a long history of struggling
with difficult logical issues and the
assertion by the race-gender-class
critics of a logically unsophisticated
position that is immediately contradicted
by their own actions. Although
theoretically against judgments of
literary value, they are, in practice,
perfectly content with their own;
having argued that hierarchies are
elitist, they nonetheless create one by
adding Alice Walker or Rigoberta
Menchu to their course reading lists.
They vacillate between the rejection of
all value judgments and the rejection of
one specific set of them -- that which
created the Western canon."

John M. Ellis, Literature Lost
[Yale University Press, 1997], p. 197

This problem turns up in many areas of dishonest intellectual or political argument, as in the quotation from John Ellis above.

Modern relativists in philosophy, of course, can hardly fail at some point to have this brought to their attention. The strongest logical response was from Bertrand Russell, who tried to argue that nothing could logically refer to itself (called his "Theory of Logical Types" [note]). That was a move that defeated itself, since in presenting the Theory of Types, Russell can hardly avoid referring to the Theory of Types, which is to do something that he is in the act of saying can't be done or that doesn't make any sense [note]. In general, one need merely consider the word "word" and ask whether it refers to itself. Of course it does. The word "word" is a word. Other modern relativists in philosophy (e.g. Richard Rorty) try to pursue Protagoras's own strategy that their views are "better" rather than "true." Rorty sees this as a kind of Pragmatism, which is not concerned with what is true but just with what "works."

Pragmatism is really just a kind of relativism; and, as with Protagoras's own strategy, it is a smoke screen for the questions that ultimately must be asked about what it means that something is "better," or now that something "works." Something "works," indeed, if it gets us what we want -- or what Richard Rorty wants. But why should we want that? Again, the smoke screen puts off the fatal moment when we have to consider what is true about what is actually good, desirable, worthy, beneficial, etc. All these responses are diversions that attempt to obscure and prevent the examination of the assumptions that stand behind the views of people like Rorty. It is easier to believe what you believe if it is never even called into question, and that is just as true of academic philosophers like Rorty as it is for anybody else. Being intelligent or well educated does not mean that you are necessarily more aware of yourself, what you do, or the implications of what you believe. That is why the Delphic Precept, "Know Thyself" (Gnôthi seautón) is just as important now as ever.

Relativism turns up in many guises. Generally, we can distinguish cognitive relativism, which is about all kinds of knowledge, from moral relativism, which is just about matters of value. Protagoras's principle is one of cognitive relativism. This gives rise to the most conspicuous paradoxes, but despite that there are several important forms of cognitive relativism today: historicism is the idea that truth is relative to a given moment in history and that truth actually changes as history does. This derives from G.W.F. Hegel, although Hegel himself thought there was an absolute truth, which would come at the "end of history" -- where he happened to be himself, curiously. This kind of historicism was taken up by Karl Marx, who thought that every kind of intellectual structure -- religion, philosophy, ethics, art, etc. -- was determined by the economic system, the "mode of production," of a particular historical period. A claim to truth about anything in any area could therefore be simply dismissed once its economic basis was identified: labeling something "bourgeois ideology" means that we don't have to address its content. Like Hegel, however, Marx did think there was an absolute truth at the "end of history," when the economic basis of society permanently becomes communism. Modern Marxists, who don't seem to have noticed the miserable and terrible failure of every attempt to bring about Marx's communism, can hardly do without their absolutizing "end of history" [note]; but modern Hegelians (e.g. Robert Solomon) can create a more complete relativism by removing Hegel's idea that there is an "end" to history. Unfortunately, that creates for them the typical relativistic paradox, for their own theory of history no longer has any basis for its claim to be true for all of history. Hegel didn't make that kind of mistake.

Another modern kind of cognitive relativism is linguistic relativism, the view that truth is created by the grammar and semantic system of a particular language. This idea in philosophy comes from Ludwig Wittgenstein, but it turns up independently in linguistics in the theory of Benjamin Lee Whorf. On this view the world really has no structure of its own; its apparent structure is entirely imposed by the structure of language. Learning a different language thus means in effect creating a new world, where absolutely everything can be completely different from the world as we know it. Wittgenstein called the rules established by a particular language a "game" that we play as we speak the language. As we "play" a "language game," we indulge in a certain "form of life." [note]

In linguistics, Whorf's theory has mostly been superseded by the views of Noam Chomsky that there are "linguistic universals," i.e. structures that are common to all languages. That would mean that even if language creates reality, reality is going to contain certain universal constants. In philosophy, on the other hand, Wittgenstein is still regarded by many as the greatest philosopher of the 20th century. But his theory cannot avoid stumbling into an obvious breach of self-referential consistency, for the nature of language would clearly be part of the structure of the world that is supposedly created by the structure of language. Wittgenstein's theory is just a theory about the nature of language, and as such it is merely the creation of his own language game. We don't have to play his language game if we don't want to. By his own principles, we can play a language game where the world has an independent structure, and whatever we say will be just as true as whatever Wittgenstein says. Thus, like every kind of relativism, Wittgenstein's theory cannot protect itself from its own contradiction. Nor can it avoid giving the impression of claiming for itself the very quality, objective truth, that it denies exists. If it does not make that claim, there is no reason why we need pay any attention to it.

Although Protagoras gives us a principle of cognitive relativism, his own main interest was for its consequences in matters of value. Relativism applied to value -- that truths of right and wrong, good and evil, and the beautiful and the ugly, are relative -- is usually called moral relativism. This is inherently a more plausible theory than a general cognitive relativism, for people disagree much more about matters of value than they do about matters of fact. And if we are talking about something like justice or goodness, it is much more difficult even to say what we are talking about than it is when we are talking about things like tables and chairs. We can point to the tables and chairs and assume that other people can perceive them, but we have a much tougher time pointing to justice and goodness. Nevertheless, moral relativism suffers from the same kinds of self-referential paradoxes as cognitive relativism, even if we divorce it from cognitive relativism and place it in a world of objective factual truths. We can see this happen in the most important modern form of moral relativism: cultural relativism.

Cultural relativism is based on the undoubted truth that human cultures are very different from each other and often embody very different values. If Italians and Arabs value female chastity and Tahitians and Californians don't, it is hard to see how we are going to decide between these alternatives, especially if we are Californians. A classic and formative moment in this kind of debate came when a young Margaret Mead went to Sâmoa and discovered that casual sex, non-violence, and an easygoing attitude in general made adolescence in Sâmoa very much different from adolescence back in the United States. Her conclusions are still widely read in her book Coming of Age in Samoa. These discoveries simply confirmed the views of Mead's teacher, Franz Boas, that a culture could institute pretty much any system of values and that no culture could claim access to any absolute system of values beyond that. Since Boas and Mead were anthropologists, this gave cultural relativism the dignity, not just of a philosophical theory, but of a scientific discovery. Strong statements about cultural relativism are also associated with another famous anthropologist, and friend of Mead's, Ruth Benedict. Today the anthropological empirical evidence that cultures are different is usually regarded as the strongest support for cultural relativism, and so for moral relativism.

There are several things wrong with this. First of all, Mead's own "discoveries" in Sâmoa were profoundly flawed. What Sâmoans have always known is that Mead was deceived by her teasing adolescent informants and failed to perceive that female chastity was actually highly prized in Sâmoa and that there was very little of anything like "casual sex" going on there in Mead's day. Even in her book there are strange aspects, as when Mead characterizes a certain kind of casual sex as "clandestine rape." That has an odd ring -- until we discover that it really is a kind of rape, not a kind of casual sex. It also turns out that Sâmoan culture is rather far from being non-violent or easygoing [note]. The anthropological world has had a tough time coming to grips with this, because of Mead's prestige and because of the weight of ideological conclusions that has rested on it; but the whole story is now out in a book, Margaret Mead and Samoa, by an anthropologist from New Zealand named Derek Freeman. Now, there actually are other Polynesian cultures, such as in Tahiti, where attitudes about sex seem to be rather freer than they are in Sâmoa or even in the United States [note]. So it might be possible to reargue Mead's case with different data. But the point of this episode is that it shows us how easy it is for an anthropologist with ideological presuppositions to see what they want to see. This kind of "scientific evidence" is a slippery thing, and it is too easy to draw the kinds of sweeping conclusions that were drawn from it. If an anthropological study is going to prove a fundamental point about the nature of value, we must be careful about what the point is supposed to be and how such a thing can be supported by evidence.

The great problem with the logic of something like Mead's "discoveries" is that even if we accept that cultures can have some very different values, this still doesn't prove cultural relativism: for while cultural relativism must say that all values are relative to a particular culture, a cultural absolutism merely needs to deny that, saying that not all values are relative to a particular culture, i.e. that some values are cultural universals. Thus, Margaret Mead could have visited a hundred Sâmoas and found all kinds of values that were different; but if there is even one value that is common to all those cultures, cultural relativism is refuted. That would be a matter for an empirical study too, although a much more arduous one.
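
In terms of quantifiers, the asymmetry is easy to see. The notation below is only an illustrative sketch, not in the original text: let U(v) say that value v is a cultural universal.

    \text{Cultural relativism:}\quad \forall v\; \neg U(v)
    \text{Cultural absolutism (in the minimal sense needed here):}\quad \exists v\; U(v)

No accumulation of culturally variable values can establish the universal claim of relativism, while exhibiting a single universal value refutes it.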

But the deepest problem with cultural relativism and its anthropological vindication, whether by Mead or others, comes when we consider what it is supposed to be. As a methodological principle for anthropology, we might even say that cultural relativism is unobjectionable: anthropologists are basically supposed to describe what a culture is like, and it really doesn't fit in with that purpose to spend any time judging the culture or trying to change it. Those jobs can be left to other people. The anthropologist just does the description and then moves on to the next culture, all for the sake of scientific knowledge. Unfortunately, it is not always possible for an anthropologist to be so detached. Even in Coming of Age in Samoa, Mead clearly means to give us the impression that easygoing Sâmoan ways are better than those of her own culture (or ours). Since, as it turns out, Sâmoan culture wasn't that way after all, we end up with Mead in the curious position of making her own a priori claim about what kinds of cultural values in general are valuable, regardless of who might have them. She didn't just see what she wanted to see, but she saw the better world that she wanted to see. More importantly, cultural relativism, as many anthropologists end up talking about it, gets raised from a methodological principle for a scientific discipline into a moral principle that is supposed to apply to everyone: That since all values are specific to a given culture, then nobody has the right to impose the values from their culture on to any other culture or to tell any culture that their traditional values should be different.

However, with such a moral principle, we have the familiar problem of self-referential consistency: for as a moral value from what culture does cultural relativism come? And as a way of telling people how to treat cultures, does cultural relativism actually impose alien values on traditional cultures? The answer to the first question, of course, is that cultural relativism is initially the value of American and European anthropologists, or Western cultural relativists in general. The answer to the second question is that virtually no traditional cultures have anything like a sense of cultural relativism. The ancient Egyptians referred to their neighbors with unfriendly epithets like "accursed sand-farers" and "wretched Asiatics." In the objects from Tutankhamon's tomb, we can see the king slaughtering various enemies of Egypt, African and Asiatic. The Greeks actually gave us the word "barbarians," which was freely used by the Romans and which we use to translate comparable terms in Chinese, Japanese, etc. Traditional cultures tend to regard themselves as "the people," the "real people," or the "human beings," while everyone else is wicked, miserable, treacherous, sub-human, etc [note].

The result of this is that if we want to establish a moral principle to respect the values of other cultures, we cannot do so on the basis of cultural relativism; for our own principle would then mean that we cannot respect all the values of other cultures. There are going to be exceptions; and it actually isn't too difficult to make a list of exceptions we might like to make: slavery, human sacrifice, torture, infanticide, female circumcision, and other bodily mutilations of children or criminals. Those are the easy ones. But once given those things, the task before us is clearly a more difficult and sobering one than what we contemplated through the easy out of cultural relativism. On the other hand, we might try to save cultural relativism by denying that it is a moral principle. Of course, if so, nobody would care about it, and there wouldn't be anything wrong with one culture conquering and exterminating another, especially since that has actually been the traditional practice of countless cultures through the ages. Instead, a principle of cultural relativism never enters public debate without being used as a moral principle to forbid someone from altering or even from criticizing some or all the values of specific cultures. As a practical matter, then, it is meaningless to try to save cultural relativism by erasing the moral content that is usually claimed for it.

Cognitive relativisms, of course, will always imply some kind of moral or cultural relativism. Historicism always does that, and, for linguistic relativism, Wittgenstein actually provides us with a nice term for relative systems of value: "forms of life." The hard part is when we then ask if Hitler and Stalin simply had their own "forms of life," which were different from, but not better or worse than, ours. Only an ideologue, infatuated with relativism, would answer, "yes." But if we answer "yes," there is, of course, nothing wrong with us defeating and killing Hitler or Stalin. But neither would there be anything wrong with them defeating and killing us. We would have no moral right to try and stop them, but then they would have no moral right to complain about us trying to stop them -- except in terms of their own "form of life," which we don't have to care about. On the other hand, people who talk about "forms of life," and who even might answer "yes" to this kind of question, inevitably make the same move as Protagoras and try to start claiming that their "form of life" is "better" than Hitler's, or ours. So the whole cycle of paradox begins again.

The problem with recognizing the self-contradictory and self-defeating character of relativism is that it does remove the easy out. We may know thereby that there are absolute and objective truths and values, but this doesn't tell us what they are, how they exist, or how we can know them. In our day, it often seems that we are still not one iota closer to having the answers to those questions. Thus, the burden of proof in the history of philosophy is to provide those answers for any claims that might be made in matters of fact or value. Socrates and Plato got off to a good start, but the defects in Plato's theory, misunderstood by his student Aristotle, immediately tangled up the issues in a way that still has never been properly untangled. Most philosophers would probably say today that there has been progress in understanding all these issues, but then the embarrassment is that they mostly would not agree about just in what the progress consists. The relativists still think that progress is to return to what Protagoras thought in the first place. What they really want is that easy out, so as not to need to face the awesome task of justifying or discovering the true nature of being and value.

A Death in the Rainforest, by Don Kulick, Algonquin Books of Chapel Hill, 2019

The Fallacy of Moralistic Relativism


Copyright (c) 1996, 1998, 1999, 2000, 2008, 2012, 2016, 2018, 2021 Kelley L. Ross, Ph.D. All Rights Reserved

Relativism, Note 1

Protagoras, for his part, admitting as he does that everybody's opinion is true, must acknowledge the truth of his opponents' belief about his own belief, where they think he is wrong.

Theaetetus 171a. F.M. Cornford translation.

Return to text

Relativism, Note 2

Russell was originally trying to resolve paradoxes of self-reference in Set Theory. It was just a happy added benefit for Russell that his theory could be used to save Relativism. But the consensus now is that Set Theory is better off without the Theory of Types.
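
The relevant case, given here only by way of illustration, is Russell's own paradox about the set of all sets that are not members of themselves:

    R = \{\, x \mid x \notin x \,\}
    R \in R \;\Longleftrightarrow\; R \notin R

The Theory of Types blocks the paradox by assigning every expression a "type" and allowing a set to contain only items of lower type, so that expressions like "x \in x" are not even well formed.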

Return to text

Relativism, Note 3

On a more technical level, there is the question of which "type" the Theory of Types itself belongs to. Each "type" of expression can only refer to the next lower order type of thing (and never to itself), but the Theory of Types obviously refers to all types, and this violates the fundamental principle of the theory.

Return to text

Relativism, Note 4

But notice that Marx and Marxists must fall into a paradox of self-referential consistency anyway: there may be a cognitively absolute standpoint for knowledge, but Marx is not in it. Marx's own consciousness did not depend on a communist, proletarian mode of production; so he cannot really claim to be producing absolute knowledge, much as he would like to. Marx's own "mode of production" was actually to sponge off his relatives and friends, including Engels, who derived his money from the family business -- a factory: Engels was himself a capitalist.

Return to text

Relativism, Note 5;
The Whorfian Hypothesis

Benjamin Lee Whorf was fascinated by the grammatical peculiarities of isolated languages like that of the Hopi people of the American Southwest. However, John McWhorter has pointed out that difficult and elaborate systems of both grammar and phonology are indeed characteristic of isolated languages [cf. The Power of Babel, A Natural History of Language, Perennial, 2003]. When languages are in contact with other languages, and especially when a language is often learned by adult speakers of other languages, the grammar and phonology both tend to simplify. The peculiarities melt away, and we end up looking at many things that would qualify as Chomsky's linguistic universals.

Thus, trade languages like Malay, which are largely spoken by adults as second languages, have some of the simplest grammars and easiest pronunciations of any languages. English itself has experienced simplification from its Germanic origins and even in comparison to the overlay of French introduced by the Norman conquest. This simplification began with the settlement of Danes and Norwegians in England, who, although their languages were related and comparable to English, nevertheless were learning the new language as adults. English ends up with a grammar where the nouns act as they do in French, while the verbs act as they do in German. This avoids the complexity of French verbs and German nouns, both of which are highly inflected. People learning English may complain about the spelling or the idioms, but they don't need to deal with the challenges of Chinese pronunciation or Russian grammar.

Whorf's thesis was that the structure of language imposes a map or paradigm on thought, which the speaker cannot avoid:

We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way -- an agreement that holds throughout our speech community and is codified in the patterns of our language. The agreement is, of course, an implicit and unstated one, but its terms are absolutely obligatory; we cannot talk at all except by subscribing to the organization and classification of data which the agreement decrees. [Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf, edited by John B. Carroll, MIT Press, 1956, pp.213-214, emphasis in original; quoted by John McWhorter, Our Magnificent Bastard Tongue, Avery, 2009, p.143]

This is refuted by a couple of simple considerations. This imposition would come either from the vocabulary or from the grammar of the language. A few years ago I heard a native speaker of Chinese claim that, because "White House" in Chinese actually means "White Palace" (白宮), the Chinese language could not express the values of a Constitutional Republic (which doesn't put the President in a palace). However, the Chinese language, which contains other words for domicile, comparable to "house," "home," or "mansion" in English, did not force anyone to translate "White House" as "White Palace." That reflected the intention of the translator, not of the language. And it doesn't make any difference anyway. A Presidential "palace" can and does exist in a republic -- like Italy, where the official residence of the President of the Republic is still the Quirinal Palace, where the Popes and Kings had previously lived.

In any case, if a person's expression is limited by the available vocabulary of their language, there are three things they can do:  (1) coin a new word, defined with the required meaning; (2) borrow a word from another language which already has the meaning wanted; and (3) use an existing word but give it a new, and perhaps related, meaning. These actions can be combined. For instance, lens in Latin means "lentil" (from lentilis, the adjectival version). This has been borrowed from Latin, not to mean a kind of bean but to mean a piece of glass, shaped a bit like a lentil, that has the property of focusing light. In modern usage, the origin is entirely forgotten.

Nota bene:  Each of these actions by itself refutes the thesis that language determines thought, for each of them is a manipulation of language to express thoughts that, in the judgment of the speaker, could not previously be expressed save through awkward periphrasis. Thus, you have a new thought or idea, and you look for a way to express it, first by explanation, second by terminology. Either of those steps can take a while, as you look for the best way to express in language what starts in the mind.

This also refutes the assertion of S.I. Hayakawa, as a fundamental principle of Semantics, that, "The way you talk determines the way you think." No. In these cases, the way you talk is determined by the way you think and the devices you must use, in the face of poor available vocabulary, to express the thought.

The more serious and plausible side of Whorf's thesis is that the grammar of a language is what imposes the map or paradigm on thought. Here languages like Hopi provide some support, with grammatical inflections that specify all sorts of characteristics of objects (e.g., "long," "short," "sharp," etc.) that we expect we would often rather not bother with. The remedy for this is a simple one:  Break the rules. Grammar isn't like gravity, and someone speaking Hopi will not fall to their death if they skip some of the inflections required by customary grammar. So when Whorf said of grammar that "its terms are absolutely obligatory," this is quite false. It may be "obligatory" for statements if they are to be grammatical, but then nothing stops anyone from speaking ungrammatically.

As it happens, the grammar of languages tends to change and simplify, not when speakers are deliberately breaking the rules, but when they can't help it -- either because they are not native speakers (like the Vikings in England), who cannot remember or get all the rules right, or, more importantly, because they are young. A more complicated grammar is more difficult to learn, and we know how even the relatively simple grammar of English is often mangled by children, or teenagers -- the new, trendy expressions of teenagers may be subversive of traditional grammar.

Thus, the impression that we get from Whorf or Hayakawa, that people are imprisoned by their grammar, is not true. Grammar is fragile, not strong, and in every generation, or with every influx of new but adult speakers, its strength is sorely tested. The more complex and inflexible systems will break down; and indeed we only find them in languages that have been isolated for centuries -- especially American Indian languages, or those of the Caucasus -- situations that also tend to go with very conservative cultures, where there is little call for, or evidence of, much in the way of new ideas. Also, since the rules of grammar are often ambiguous in their application, there is room for variation even without quite breaking them. In turn, however, competent adult speakers can explicitly break the grammatical rules, at need, just as easily as they can coin, borrow, or change words and their meanings.

But guns he had seen, in the hands of men on Mars, and the expression of Jill's face at having one aimed at her he did not like. He grokked that this was one of the critical cusps in the growth of a being wherein contemplation must bring forth right action in order to permit further growth. He acted.

Michael Valentine Smith, Stranger in a Strange Land, by Robert Heinlein [1961, A Berkley Medallion Book, 1968, 1973, p.69]



λέγων· ὅτι πάροικός εἰμι ἐν γῇ ἀλλοτρίᾳ.
Dicens, advena fui in terra aliena.
[Moses] said, I have been a stranger in a strange land.

Exodus 2:22

The Whorfian Hypothesis in Stranger in a Strange Land and in Arrival

In the novel Stranger in a Strange Land [1961] by Robert Heinlein and in the movie Arrival [2016, directed by Denis Villeneuve and written by Eric Heisserer], the stories depend on the Whorfian Hypothesis about language being true. In both of them, simply learning a new language enables characters to manipulate the world, and apparently suspend laws of nature, in ways not possible to them previously.

In Heinlein's book, Michael Valentine Smith, who was orphaned on Mars when his astronaut parents died (or were murdered) there, was raised by Martians and then later was returned to Earth by a subsequent expedition. He does not know that human beings, without the benefit of the Martian language, do not experience reality in the same way that he does and that they lack abilities that he takes for granted. Thus, levitation and control of ambient conditions are things that he does not find remarkable or in need of explanation. Most dramatically, if he perceives or "groks" (glossed as "to taste," like Latin sapio, "to taste" or "know") "wrongness" in anything, including people, he can, remotely, tip them out of our universe of three dimensional space. They disappear.

When Smith realizes that humans cannot do these things, he cannot explain how he is able to do them without teaching his human friends the Martian language, which he begins to do. They are then able to perform similar feats. Heinlein, of course, cannot explain what it is about the Martian language that makes interaction with the physical world so different. Ex hypothesi, he could not. Eventually, Smith turns his language instruction into a religion (like Heinlein's science fiction colleague L. Ron Hubbard?) and allows himself to be martyred to the faith. It is not clarified whether the circumstance that his spirit survives death is also due to the Martian language or is just true in general, as it appears to be.

The film Arrival, based on the science fiction story "Story of Your Life," by Ted Chiang, stars Amy Adams, Jeremy Renner, and Forest Whitaker. Renner was the star of The Hurt Locker, which won the Academy Award for Best Picture of 2009. Forest Whitaker was the memorable modern samurai hit man in Ghost Dog [1999]. Amy Adams has been in various winning movie roles, including the delightful foodie movie, Julie & Julia [2009].

In Arrival Adams plays a linguistics professor who is called into action, by Army Colonel Forest Whitaker, to try and communicate with alien space ships, twelve of them, that have arrived at Earth. Teams from various nations on Earth are able to enter the ships and interview the aliens, who appear to produce speech, but no one can establish any actual communication.

Whitaker plays some of the alien sounds for Adams, apparently thinking that because she speaks Persian (which the movie calls "Farsi") and Chinese, she's going to be able to understand them. She tells him that she can't do that without directly addressing the aliens. Of course, that can be an element of it; but learning or deciphering any language requires much more in the way of examples of it than can be gotten quickly even in an actual interaction, let alone from a single brief tape of noises. Whitaker's reaction is pretty much "you're no fun," and he initially withdraws his offer, which is silly, since nobody can do what he seems to want. The movie also creates the deceptive impression that everyone in linguistics is a polyglot, and the single reference to Sanskrit by Adams leads reviewers to conclude that she also knows Sanskrit. But few people in linguistics are actual polyglots, although my own Persian professor from UCLA, Donald Stilo, now retired from the Max Planck Institute in Germany, is one. Writing a dissertation on Persian in linguistics, Stilo was also designing a course in Persian, and was teaching it. I recently discovered that Stilo's name came from a historically significant city in Calabria.

Adams, accompanied by physicist Jeremy Renner, discovers that alien "speech" is actually a kind of writing, with the aliens squirting ink into the air that forms into largely circular glyphs, which look a lot like inkblots. The form of the glyphs contains semantic elements that express, all at once, entire sentences. Adams is eventually able to analyse and identify these elements, reproduce them (digitally!), and engage in dialogues, although often with confusing and even dangerous ambiguities.

Since the glyphs contain whole sentences all at once, it is noted in the movie that they are free of time. And this is what leads into a Whorfian metaphysics -- with the explicit identification of the "Sapir-Whorf" theory in the movie. As it happens, in the metaphysics of the alien language, time does not exist in the strictly linear fashion known to us. As Adams begins to learn the language, she also begins to experience episodes of seeing things that are not in the present. Actually, she knows so little of the language at that point, and is so intent on translating it into English, that it is not believable that she should already be experiencing reality in such a different way.

In Stranger in a Strange Land much is made of the circumstance that the word grok represents a concept that resists translation into human speech. Although, as we have seen, simple glosses are possible for it, the word tends to be used in English unchanged by those learning the Martian language. We see nothing of the sort in Arrival, although there easily could have been a sequence where Adams points to part of a glyph and voices some perplexity about what it could mean. But perhaps there is something like that, since she and Renner keep seeing a glyph they translate as "timeless." Of course, the amount of time this all would take is not something easily accommodated in a movie, so the rapid transformation of Adams' world can be excused as poetic license.

No more than in Heinlein can Arrival explain how this alien language alters the apparent structure of reality, to the point where impossible things can happen. But the consequences of this are very different in Arrival than in Stranger in a Strange Land. Michael Valentine Smith has some kind of super-powers, but the effect on the character played by Amy Adams takes a much more personal form.

At the beginning of the movie we see what we take to be recollections by Adams of her young daughter, who subsequently had died of what looks like cancer. We don't see a husband, and at the beginning of the main story she is quite alone and unattached to anyone. As the time displacement experiences begin to occur, most of them also involve her daughter. Still no husband, but we learn in one of them that the girl's father has left because Adams has told him about the disease that will take their daughter's life. This kind of thing happens with real people.

A large part of the dramatic payoff of the movie is when we learn that all these recollections of the daughter are actually in the future. The father of the girl is Renner, whom Adams has only met on the alien translation project. He later leaves because Adams has told him, long before the illness occurs, that their daughter is doomed and fated to die. He can't handle it -- although we wonder, since he has been learning the alien language himself, why he has not acquired a similar familiarity with the future.

Thus, much of the interest of the movie is in the moral issues of the fatalism implied by its metaphysics, which in turn is allowed by the Whorfian Hypothesis, which is thus taken to strongly imply that a real metaphysics can be anything that a language in some sense "wants" it to be. The producers of the movie seem to be aware of these overtones. Thus, as Adams and Renner become intimate, and he voices a desire for a child, she asks him if there is anything about his life that he would change. He answers, "No," perhaps not realizing that she means all of his life, including the future, even the tragic death of their child, which presumably could have been avoided by not having that child at that time. In Dune [Frank Herbert, 1965], young Paul Atreides (Παῦλος Ἀτρείδης -- pardon the Greek) begins to see the future (because of a drug, not a language), but he also sees different futures. It turns out that the future he would rather avoid, a Cosmic Jihad, is unavoidable if he does what he must do, which is liberate Dune from the evil Baron Harkonnen. Liberation, Jihad. No liberation, no Jihad. This is a dilemma, but it is not fatalistic in the same way that we find in Arrival. The future in Dune is a problem of navigation, not of fatalistic acceptance.

But it is hard to be consistent about this. The aliens tell Adams that they are teaching humans their language because in 3000 years they will need our help. So, because of this, the aliens have chosen to come to Earth? But then, knowing the tragedy of her daughter's life, Adams chooses to have the baby anyway? Or are there really no true choices involved? The aliens are compelled to come to Earth, as Adams is compelled to have the baby.

This is a paradox of free will and foreknowledge. If we have free will, and we know the future, then we can change our choices to produce a better future. This is what we try to do anyway, when all we have is speculation about the future and imperfect knowledge about the possible consequences of our actions. But if our knowledge of the future were perfect, then has the future in a sense already happened, which would mean that our choices have already been made for us? Elsewhere I have recently considered a physicist who stumbles into such a mess. This has long been a problem in theology. When God creates the world, he already knows that Adolf Hitler will murder millions of people. So why does he allow this person even to be conceived? If Hitler simply never exists, he will know nothing about it; and all the suffering he would have inflicted will not happen. And if somehow Hitler must exist, aren't there going to be an infinite number of other possible individuals who simply won't exist themselves? What have we, or God, got against them, especially if most will be better people than Adolf Hitler?

Thus, by way of the Whorfian Hypothesis, Arrival opens up a fountain of metaphysical, moral, and theological problems. Is it good that Amy Adams values her daughter's short life? Or is it wrongful that she allows her daughter to exist and live a life that ends in intense suffering and tragedy? Sometimes there are dilemmas like this in the real world, when parents discover that they will or may have a child with grave birth defects or hereditary disease. Thus, we find that the Whorfian Hypothesis can lead into a tangle of larger issues, which may already exist without it, but which can appear for us if all we are doing is talking about the meaning of language.

The outright fatalism of Arrival might be compared with the confusion between determinism and teleology in the movie Knowing. There is no confusion in Arrival, just the consequences of the Whorfian Hypothesis; but both movies are, after a fashion, meditations on the meaning of life.

Return to text

The Linguistic Turn

Is That a Fish in Your Ear?, Translation and the Meaning of Everything, by David Bellos, Farrar, Straus and Giroux, 2011, 2012


Relativism, Note 6

Although, outside of Sâmoa, Sâmoans themselves don't always like to admit this. On a personal note, the first thing I ever heard about Sâmoan behavior was from my first wife, who was Part Hawaiian (Hapa Haole) and had lived all her life in Hawaii. Once she happened to mention that Sâmoans in Honolulu had a reputation for violence--e.g. beating up sailors with baseball bats. Years later I saw some Sâmoans on television in Los Angeles, after an incident with the Los Angeles County Sheriff's Department, complaining about the "stereotype" of Sâmoans being violent, when I had never heard any such thing in Los Angeles. I suspect that most Angelenos would be surprised even to know that Sâmoans lived among them, much less have any ideas about what they are like. I only knew the "stereotype" because of my life in Hawaii.

While the crime rate in Sâmoa is a matter for police records, this all seems a matter of one stereotype against another: of Polynesia as a peaceful place of love, beaches, and hulas, as against harsher versions. The reality certainly was harsher: All of Polynesia was ruled by a warrior nobility, the ali'i in Hawai'i, ariki in New Zealand, etc. In Hawai'i some chiefs were so sacred (kapu) that commoners could be killed just for looking at them. War was familiar, though only the introduction of firearms made it possible for someone like Kamehameha I to actually unify so extensive a domain as Hawai'i: the extraordinary final battle of which was Kamehameha driving the army of the King of O'ahu over the spectacular cliff of the Nu'uanu Pali. So no one should be surprised, or ashamed either, that such a heritage could produce a certain ferocity even now, whether in Sâmoa or elsewhere. As the title of a recent movie about the Mâori of New Zealand puts it: Once Were Warriors.

Return to text

Relativism, Note 7

The details of sex in Tahiti can be gathered from Robert I. Levy, Tahitians, Mind and Experience in the Society Islands [University of Chicago Press, 1973]. There are also, of course, the famous stories of Hawaiian girls swimming out naked to Captain Cook's ship, or to the later whalers, willing to bestow their charms for as little in return as an iron nail. Captain Cook began posting guards to repel such tender boarders, both out of concern for spreading venereal disease among them and out of worry that the ship might fall apart from all the extracted nails.

With so much free love, we might wonder, how did the inevitable children get supported? And didn't the Polynesians have any concern about parentage? Well, the whole picture may not add up to anything as free, open, and irresponsible as it might seem at first. For one thing, there was a considerable difference between commoners and the nobility (the ali'i in Hawai'i, ari'i in Tahiti, ariki in New Zealand, etc.). The nobility definitely were very concerned about parentage, since their status depended on their genealogies, which were remembered and chanted in care and detail. It is unlikely that there were any naked ali'i girls swimming out to the sailors.

In the second place, there are reports from various parts of the Pacific that an out of wedlock child, as evidence of fertility and health, enhanced a girl's marriage prospects. A girl only began to be considered "loose" if she had more than one premarital child. At a time when people did not live long, and it was common for women to die in childbirth, it is reasonable to suppose that marriageable girls would really not have much time for extra premarital pregnancies, and that few would want to risk continued pregnancies without the social connection that marriage would bestow.

At the same time, the care and status of any extramarital, or even marital, children was assured for other reasons. If Hawai'i is at all representative of the rest of Polynesia and the Pacific, then the institution of adoption or fosterage was fully capable of absorbing any children, premarital or otherwise, that a woman might not want to raise herself. In Hawaiian, "hânai" means (as a verb) "to raise, feed, nourish, etc." and (as a noun or adjective) "foster/adopted child." There hardly seems to be a difference between hânai fosterage and adoption, since the children were usually fully informed and aware of their natural parents, and reckoned their descent from them. Thus, Queen Lili'uokalani (1838-1917) was not raised by her natural parents but knew who they were and was fully conscious of her royal descent. Thus, there was no shame or secrecy about adoption, and any inconvenience occasioned by out of wedlock birth could be accommodated without stigma or disruption.

While it is tempting to praise these arrangements as humane and sensible, which they certainly seem to be, the viability of the institution really depended on a couple of factors that may no longer be possible:  One was the absence, as far as we can tell, of venereal disease. Today, much extra-marital sex runs the risk, not only of catching and passing fatal disease, but of courting sterility through less serious, but nevertheless damaging, infections. Also, the ease of hânai adoption depended on the casualness with which children could be circulated -- implying too a reciprocity among people who basically all knew each other. This becomes emotionally and legally rather more difficult in a larger, more impersonal, and legalistic society. Nevertheless, we might say that the modern prudent use of birth control, which limits unwanted pregnancies, and the restrained and prudent conduct of a small number of premarital sexual relationships, with an eye to avoiding disease, now has tended to reproduce the more restrained version of Polynesian sexual activity, rather more restricted than Mead's Sâmoa, but somewhat more open than the actual Sâmoa (where a victim of "clandestine rape" could only preserve her prospects in life by marrying the rapist).

All these considerations, of course, speak rather more for the universality of human nature, which adapts to circumstances, than for cultural relativism.

Return to "Relativism"

Return to "Gender Stereogypes and Sexual Archetypes"

Relativism, Note 8

The German word for "German" is Deutsch, which meant "of the people" and is related to theoda in Old English, to "Dutch" in Modern English, and to another Roman word for Germans, "Teutons." In the movie Little Big Man, considerable humor is derived from Chief Dan George speaking of his own people as the "human beings" and of others being adopted into the tribe as "becoming human beings." Islâm traditionally divides the whole world into the Dâru l'Islâm, "the House of Islâm," and the Dâru l'H.arb, "the House of War," which means the realm of everybody else, where Islâm is ready to carry on the holy war (Jihâd). Every single one of these peoples and traditions regarded their ways and their values as best and everyone else's as deficient or terrible.

Return to text