N.B. This is a chapter in a book about truth and knowledge. It's a major revision to the version of this chapter in the first draft, so I'm publishing it again to get feedback on the completely new text, and have replaced the original version of the chapter with this one in the sequence. You can find more info about the book on its website.
Like many people, I have a hard time learning foreign languages. I've tried dozens of apps, books, and courses to learn everything from Mandarin Chinese to Classical Greek, all with little success. The closest I ever came to really learning another language was when I took French in high school, but I was so far from fluency that, after three years of intense study, I was lucky to receive a passing grade.
I started out with high hopes that learning French would be easy. All I had to do, it seemed, was memorize the mapping of French words to English ones. If I could remember that "le cheval" means horse and "la voiture" means car and so on, I'd be able to speak French fluently.
Unfortunately, it didn't work. I couldn't map between French and English fast enough to do anything more than read and write French very slowly. My teacher said I needed to learn to think in French and stop trying to translate everything in my head, but I didn't want to believe him. I was convinced that my way should work. What finally changed my mind was discovering that some English words lack a French equivalent.
Consider the word "mug". In English it encompasses many different types of cups made of different materials, from glass beer mugs to clay coffee mugs to insulated metal travel mugs and more. But in French a beer mug is "une chope", a coffee mug is "une tasse à café", and a travel mug might be "un thermos" or "un gobelet de voyage". Each one gets its own name, and there is no one word like "mug" that groups them all together. The closest translations of "mug" are the loanword "le mug"—which usually means a coffee mug—and the descriptive phrase "les tasses à anse"—cups with handles—but neither option succeeds in capturing all the implicit meaning carried by "mug" in English. There's simply no word in French that means exactly what "mug" does.
And "mug" is just one example. English has hundreds of words like "cozy" and "snack" that lack native equivalents in French, and thousands more where the meaning of the straightforward translation is close but subtly different. The reverse is also true: French is filled with words that are hard to translate directly into English, like "flâner", which means to wander aimlessly for pleasure, and "le débrouillard", meaning a resourceful person who can figure things out on their own. Obviously these French words can be explained in translation—I just did!—but no English words concisely capture their nuance.
What's more, every pair of languages is like this. No two see the world in quite the same way. Which is odd, when you think about it, because every language is trying to describe the same reality. It seems like they should all have matching words to mean exactly the same things. And yet, they don't. Instead they are all full of words with slightly different meanings that make perfect translations impossible.
Translation problems also exist among speakers of the same language. For example, English speakers of different dialects sometimes confuse each other by giving the same word different meanings, like how "pants" are outerwear to Americans but underwear to the British. Or by using different words to describe the same thing, like how "soda", "pop", and "coke" are all words for carbonated soft drinks. And, in a demonstration of cultural convergence, speakers of different English dialects have independently invented words for unsophisticated people that mean almost, but not quite, the same thing: "redneck" in American, "bogan" in Australian, and "hoser" in Canadian.
Even a single speaker will sometimes use the same word to mean different things. Consider the word "hot". It can mean that water is boiled, food is spicy, a person is attractive, or a song is popular. In each case, "hot" means roughly "high intensity or energy", but the exact meaning is contextual: it depends on the thing being described in non-obvious and ambiguous ways. People regularly have to ask for clarification when someone says "this food is hot" to know if it's "hot hot" or "spicy hot", and flirty comments hinge on the ambiguity of whether a person is "hot" because they're attractive or because they're sweating from their time on the dance floor.
That words don't have consistent meanings, both within and between languages and speakers, poses a serious challenge to knowing the truth. Words are more than a means of communication—they're also how we represent claims and reckon whether those claims are true. If the meaning of a word is uncertain, then the meaning of claims made using that word will be uncertain, and if the meaning of a claim is uncertain, then we can't be certain about that claim's truth.
Suppose I said "I'm tall". Is it true? I last measured in at 5 feet, 10 inches. That makes me taller than the global average male height of 5 feet, 7 inches and taller than 80% of all people in the world. But many people think that "tall" starts at 6 feet or higher, and I don't need to buy tall size clothes for them to fit. So am I "tall"?
I'm not sure. I don't think of myself as "tall" because I know plenty of people taller than me, but occasionally someone will refer to me as "tall" because, to them, I am. Short of agreeing on a specific criterion for measuring where "tall" starts, how we resolve the claim "I'm tall" depends on what the word "tall" is interpreted to mean.
Similarly, all claims hinge on how we interpret the words used to make them. As a result, we can end up with endless debates where people talk past each other, saying the same words, but intending different meanings. You need only spend a few minutes reading the news to find dozens of disagreements that persist because a single word like "fair" or "rational" or "progress" means different things to different people. With so many different ways to understand the same words, it's a wonder we manage to communicate at all.
And yet, communicate we do, and well enough to make scientific discoveries and cure diseases and go to space and, yes, sometimes even know what's true. How do we do it? How do we give words sufficiently clear meaning that we can understand one another, get things done, and know the truth? The answer lies in how we use words and the way they get their meanings.
Using Words
We use words all the time every day to navigate our lives. We use them to do our jobs, to run errands, to build relationships, and even to read books like this one. We use them so much that they're almost as invisible as the air we breathe. But they aren't actually invisible, and, if we look closely, we can get a sense of how they function.
Suppose you told me that you saw a "tree". Without any additional information, I can make the likely-true assumption that you saw a tall, woody plant with a trunk, branches, and leaves. I can also assume that you didn't see a cactus, because we don't conventionally consider cactuses to be trees, and that you probably saw some dirt, because trees usually grow in dirt.
How was I able to make all these assumptions? By drawing inferences about what you saw based on you saying that you saw a "tree". I know that "tree" points to a category of things in the world and that things in that category fit a pattern. I then inferred details about what you saw based on that pattern and my knowledge of and experiences with other things called "tree".
If you had instead said that you saw a "yellow" flower or a man "running", I would have been able to use the same process of inference as I did for "tree" to make sense of "yellow" and "running" and every other word you said. Even if you had said something more abstract, like that you had a "rough day", I would have been able to make sense of the metaphor of "rough day" by extrapolating from my knowledge of things with a "rough" texture, like tree bark, and inferring what that must mean about the nature of your day.
Unfortunately, inferential reasoning is error prone. Perhaps the tree you saw was a palm tree, but upon telling me that you saw a "tree", I imagined that you saw something more akin to an oak or elm. I might have assumed that it was a big, lush tree with twisting branches, and would have been confused if you told me that the tree didn't provide you with much shade. My inferences would have been reasonable given my limited information, but ultimately wrong about what you saw.
My mistake was possible because the set of things we call "tree" contains a lot of variety, and, thanks to science, we know why. Trees don't form a single phylogenetic grouping. Instead, similar features have evolved multiple times in different plant lineages. Just as bats and birds both have wings despite being only distantly related, distantly related plants have independently evolved trunks and bark. We came up with the word "tree" before we knew any of this, though, so it continues to reflect our pre-scientific understanding of the world.
Maybe we could replace "tree" with more scientifically accurate terms. We've certainly done that for other words. We used to call whales "fish". We used to call fungi "plants". We used to categorize diseases by "humours", have criteria for identifying "witches", and thought the empty expanses of the universe were filled with "aether". That we still call trees "trees" is perhaps simply a failure to carry the project of rationalizing words to match our scientific understanding far enough, and if we did carry it far enough, we could eliminate inferential errors about "tree" and all other words.
But consider the challenge of rationalizing a word like "cold". My wife thinks it's "cold" whenever it gets below 71 degrees inside our house. I think that's a comfortable temperature and only think it's "cold" when it's under 68. Who's right about when it's "cold"? Neither and both of us, because in this context, "cold" only has meaning relative to our personal perceptions. There's no fact of the matter about what counts as "cold", and even if we arbitrarily defined "cold" to mean any temperature below, say, 65 degrees, we'd have immediate need of a new word to mean "a personally uncomfortably low temperature" to give us back a word to mean what "cold" did.
So not every word can have a clear, objective definition, but maybe subjective words like "cold" are exceptions. Ignoring them, perhaps we can fix all our other words to have scientifically rigorous definitions. Many people have tried to do this, and all of them have failed. And although the ultimate reason they failed is something we'll have to return to in a later chapter, they also failed because science alone cannot tell us exactly where to draw the line between one thing and another.
Consider the story of Pluto. When astronomer Clyde Tombaugh discovered it in 1930, he identified it as the ninth planet in our solar system. But by the early 2000s, many celestial objects like Pluto were known, and it was no longer clear that Pluto should be classified as a planet.
Up to that point, the word "planet" had never been formally defined because it hadn't been necessary: everyone knew what a planet was. But now, faced with the possibility of a dozen or more new planets being named, it wasn't completely obvious which of them should qualify for planetary status. It thus fell to the International Astronomical Union to resolve the confusion.
They settled on a definition in 2006 that required planets to meet three criteria:
- They orbit the Sun.
- They have sufficient mass for their self-gravity to overcome rigid body forces, resulting in a near-round shape.
- They have cleared their neighborhood of other objects.
Pluto meets the first two criteria, but not the third. A new category, "dwarf planet", was created to include Pluto and other Kuiper Belt objects like it.
Was the third criterion scientifically necessary to the definition of "planet"? Arguably, no. The IAU could have used only the first two criteria and had a more expansive definition that included Pluto among the planets. They also could have chosen among several other possible definitions of "planet" that relied on other criteria. All of their choices were equally consistent with observed facts. So how did they choose? Based on which definition was, in their opinion, most useful.
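To see how little the facts alone constrain the choice, here's a minimal sketch in Python. The `bodies` table and function names are illustrative, not anything the IAU published; the point is that both definitions below agree with every observation about Pluto and simply draw the category boundary differently.

```python
# Hypothetical observation records: (orbits_sun, is_round, cleared_neighborhood)
bodies = {
    "Earth":   (True, True, True),
    "Jupiter": (True, True, True),
    "Pluto":   (True, True, False),  # Pluto shares its orbit with other Kuiper Belt objects
}

def is_planet_iau(body):
    """The 2006 definition: all three criteria required."""
    orbits_sun, is_round, cleared = bodies[body]
    return orbits_sun and is_round and cleared

def is_planet_expansive(body):
    """An equally fact-consistent alternative: drop the third criterion."""
    orbits_sun, is_round, _ = bodies[body]
    return orbits_sun and is_round

print(is_planet_iau("Pluto"))        # False
print(is_planet_expansive("Pluto"))  # True
```

No observation of Pluto can tell you which predicate is "correct"; picking one is a judgment about usefulness.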
What, you might be asking yourself, do opinion and usefulness have to do with anything? Shouldn't the IAU define "planet" in a scientifically rigorous and objective way that accurately describes reality? Yes, and they did, but they had several equally accurate ways of defining "planet". Thus they had to make a choice between them based on something other than scientific accuracy, and they chose the one they thought would best balance clarity in everyday and scientific communication.
A similar, though less formal, process is going on all the time to define new words and evolve the definitions of existing ones. Every time a teenager creates new slang or a word is used in a metaphor that gives it new meaning, a choice is being made about how to accurately communicate useful ideas. It's a never-ending process, and one that's fundamental to how words get their meaning.
Whence Meaning
One day you're reading a book and see the word "flotsam". You've never seen this word before and want to know what it means. What do you do?
You might try to guess from context. Sometimes this works, but let's assume that in this case it doesn't. You might then ask a friend, but they also don't know the word and can't help you. Finally, you look up the word in a dictionary, and find out it means "stuff floating or washed in by the sea, especially from a wrecked ship". That's clear enough, and now you understand what "flotsam" means.
Given how well looking up the definition of "flotsam" went, could you use a dictionary to learn what every word means? Early artificial intelligence researchers hoped so. They tried to create AIs capable of thinking by teaching them language. Their approach was to digitize definitions to capture the relationships between words. A system might know that (orchid, is_a, flower), (orchid, has_part, stem), and (orchid, grows_on, tree), as well as that (flower, is_a, plant), (plant, is_a, living_thing), and so on. This gave them a word network, which they combined with rules encoding knowledge to create so-called expert systems in an attempt to automate human reasoning.
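To give a flavor of the approach, here's a minimal sketch of such a word network in Python. The triples mirror the examples above but aren't taken from any particular historical system: facts are stored as subject–relation–object triples, and a simple query follows "is_a" links to answer category questions.

```python
# Facts as (subject, relation, object) triples.
triples = {
    ("orchid", "is_a", "flower"),
    ("orchid", "has_part", "stem"),
    ("orchid", "grows_on", "tree"),
    ("flower", "is_a", "plant"),
    ("plant", "is_a", "living_thing"),
}

def is_a(thing, category):
    """Follow is_a links transitively: an orchid is_a plant via flower."""
    if (thing, "is_a", category) in triples:
        return True
    return any(
        is_a(obj, category)
        for (subj, rel, obj) in triples
        if subj == thing and rel == "is_a"
    )

print(is_a("orchid", "living_thing"))  # True: orchid -> flower -> plant -> living_thing
```

Everything the system "knows" lives inside this web of symbols, which, as we'll see, is exactly the problem.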
Medical expert systems, for example, were given information about thousands of symptoms and diagnoses, like that "fever" meant elevated body temperature, "fatigue" meant feeling tired, and a patient presenting with fever and fatigue might have the "flu". Yet if a doctor told the system "the patient is hot and tired after exercising outside", it would infer that the patient likely had the flu, failing to recognize that their condition was more likely to be a temporary effect from recent activity. Such misdiagnoses limited the usefulness of expert systems in medicine, and similar issues in other domains eventually led AI researchers to abandon expert systems in favor of more promising paths to building machine intelligence.
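Here's a toy illustration of that failure mode, in the same sketch style. The word-to-symptom mapping and the single rule are hypothetical, not drawn from any real medical system; the point is that a system which only matches symbols turns "hot" into "fever" regardless of context.

```python
# Naive word -> symptom mapping: no model of context like "after exercising".
symptom_words = {"hot": "fever", "tired": "fatigue"}

# One hypothetical rule: if fever and fatigue are present, diagnose flu.
rules = [
    ({"fever", "fatigue"}, "flu"),
]

def diagnose(report):
    # Extract symptoms by matching words; everything else in the report,
    # including "after exercising outside", is invisible to the system.
    symptoms = {symptom_words[w] for w in report.split() if w in symptom_words}
    return [diagnosis for (required, diagnosis) in rules if required <= symptoms]

print(diagnose("the patient is hot and tired after exercising outside"))
# ['flu'] -- the recent exercise never enters the reasoning
```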
Why didn't expert systems work? In large part because they failed to solve the symbol grounding problem. The problem, first described by cognitive scientist Stevan Harnad, is that words are just meaningless symbols until they are connected to reality outside their definitions. But since early AI systems relied on intensional definitions—definitions that describe what words mean only in terms of other words—their understanding was trapped inside the word network. Unable to reason accurately about the wider world, they made mistakes that humans easily avoid by applying common sense.
How is it that we humans don't suffer from the symbol grounding problem? Because we don't rely solely on intensional definitions. Instead, we learn most words by observing examples of their use and then inferring what they mean. This gives us what Ludwig Wittgenstein, in his 1953 book Philosophical Investigations, called ostensive definitions: definitions that extend from what our experiences have shown us.
To see ostensive definitions in action, consider how a child learns the word "bird". They certainly don't look it up in a dictionary, and rarely does anyone give them an explicit definition. Instead, someone, usually a parent or teacher, points at a bird and says "look, a bird!". After that happens several times, the child develops an intuitive understanding of what a bird is, and will check that understanding by pointing at possible birds and asking "bird?". They eventually figure out that birds have feathers, wings, and beaks, lay eggs, and usually fly, all without having to be told any of that explicitly because they infer it from experience.
Similarly, most of our vocabulary is made up of words we learn ostensively, and those we learn from intensional definitions, like "flotsam", are grounded by familiarity with the ostensively defined words those definitions use. This is true even of scientifically and mathematically precise words, and the precision of our understanding of them is limited by our experiences. This is why new scientists and mathematicians often need years of training—not just to learn a broad set of knowledge and skills, but to develop the intuitions they need to understand the world in the precise terms of their field.
That word meanings ultimately come from our personal experience explains many features that at first seem odd about words, like why perfect translation between languages is impossible. Speakers of different languages have different experiences, and that leads them to find different words with different meanings useful. That's why English has a word for "mug" and French doesn't: it proved useful to English speakers, but not to French ones. The same goes for every other hard-to-translate word.
That words get their meanings from ostensive definitions also explains some of the limitations we face when trying to know the truth. To make a claim is to put it into words, and those words are defined, in your mind, in a way contingent on your experiences. When you share that claim with another person, they evaluate it based on their understanding of your words, which is based on their experiences. You both understand the claim to mean the same thing only to the extent that your experiences with those words are similar enough to produce a shared understanding. If your experiences differ enough, you'll likely find that you each take the claim to mean something different.
This is why some truth claims are especially tricky to nail down: they make use of words that are hard to define. Few people squabble if you point to a squirrel and call it a "mammal". Many more may complain if you point to an action someone took and declare it to be "good" or "bad".
Those two words—"good" and "bad"—seem to defy precise definition. Everyone has a slightly different idea about what is good and bad, and these differences are the source of much strife. So as we continue to explore the fundamentally uncertain nature of truth, we'll next look closer at why moral agreement is so hard to come by.