September 2007

In high school I decided I was going to study philosophy in college. I had several motives, some more honorable than others. One of the less honorable was to shock people. College was regarded as job training where I grew up, so studying philosophy seemed an impressively impractical thing to do. Sort of like slashing holes in your clothes or putting a safety pin through your ear, which were other forms of impressive impracticality then just coming into fashion.

But I had some more honest motives as well. I thought studying philosophy would be a shortcut straight to wisdom. All the people majoring in other things would just end up with a bunch of domain knowledge. I would be learning what was really what.

I'd tried to read a few philosophy books. Not recent ones; you wouldn't find those in our high school library. But I tried to read Plato and Aristotle. I doubt I believed I understood them, but they sounded like they were talking about something important. I assumed I'd learn what in college.

The summer before senior year I took some college classes. I learned a lot in the calculus class, but I didn't learn much in Philosophy 101. And yet my plan to study philosophy remained intact. It was my fault I hadn't learned anything. I hadn't read the books we were assigned carefully enough. I'd give Berkeley's Principles of Human Knowledge another shot in college. Anything so admired and so difficult to read must have something in it, if one could only figure out what.

Twenty-six years later, I still don't understand Berkeley. I have a nice edition of his collected works. Will I ever read it? Seems unlikely.

The difference between then and now is that now I understand why Berkeley is probably not worth trying to understand. I think I see now what went wrong with philosophy, and how we might fix it.

Words

I did end up being a philosophy major for most of college. It didn't work out as I'd hoped. I didn't learn any magical truths compared to which everything else was mere domain knowledge.
But I do at least know now why I didn't. Philosophy doesn't really have a subject matter in the way math or history or most other university subjects do. There is no core of knowledge one must master. The closest you come to that is a knowledge of what various individual philosophers have said about different topics over the years. Few were sufficiently correct that people have forgotten who discovered what they discovered.

Formal logic has some subject matter. I took several classes in logic. I don't know if I learned anything from them. [1]

It does seem to me very important to be able to flip ideas around in one's head: to see when two ideas don't fully cover the space of possibilities, or when one idea is the same as another but with a couple things changed. But did studying logic teach me the importance of thinking this way, or make me any better at it? I don't know.

There are things I know I learned from studying philosophy. The most dramatic I learned immediately, in the first semester of freshman year, in a class taught by Sydney Shoemaker. I learned that I don't exist. I am (and you are) a collection of cells that lurches around driven by various forces, and calls itself I. But there's no central, indivisible thing that your identity goes with. You could conceivably lose half your brain and live. Which means your brain could conceivably be split into two halves and each transplanted into different bodies. Imagine waking up after such an operation. You have to imagine being two people.

The real lesson here is that the concepts we use in everyday life are fuzzy, and break down if pushed too hard. Even a concept as dear to us as I. It took me a while to grasp this, but when I did it was fairly sudden, like someone in the nineteenth century grasping evolution and realizing the story of creation they'd been told as a child was all wrong.
[2]

Outside of math there's a limit to how far you can push words; in fact, it would not be a bad definition of math to call it the study of terms that have precise meanings. Everyday words are inherently imprecise. They work well enough in everyday life that you don't notice. Words seem to work, just as Newtonian physics seems to. But you can always make them break if you push them far enough.

I would say that this has been, unfortunately for philosophy, the central fact of philosophy. Most philosophical debates are not merely afflicted by but driven by confusions over words. Do we have free will? Depends what you mean by "free." Do abstract ideas exist? Depends what you mean by "exist."

Wittgenstein is popularly credited with the idea that most philosophical controversies are due to confusions over language. I'm not sure how much credit to give him. I suspect a lot of people realized this, but reacted simply by not studying philosophy, rather than becoming philosophy professors.

How did things get this way? Can something people have spent thousands of years studying really be a waste of time? Those are interesting questions. In fact, some of the most interesting questions you can ask about philosophy. The most valuable way to approach the current philosophical tradition may be neither to get lost in pointless speculations like Berkeley, nor to shut them down like Wittgenstein, but to study it as an example of reason gone wrong.

History

Western philosophy really begins with Socrates, Plato, and Aristotle. What we know of their predecessors comes from fragments and references in later works; their doctrines could be described as speculative cosmology that occasionally strays into analysis. Presumably they were driven by whatever makes people in every other society invent cosmologies. [3]

With Socrates, Plato, and particularly Aristotle, this tradition turned a corner. There started to be a lot more analysis.
I suspect Plato and Aristotle were encouraged in this by progress in math. Mathematicians had by then shown that you could figure things out in a much more conclusive way than by making up fine sounding stories about them. [4]

People talk so much about abstractions now that we don't realize what a leap it must have been when they first started to. It was presumably many thousands of years between when people first started describing things as hot or cold and when someone asked "what is heat?" No doubt it was a very gradual process. We don't know if Plato or Aristotle were the first to ask any of the questions they did. But their works are the oldest we have that do this on a large scale, and there is a freshness (not to say naivete) about them that suggests some of the questions they asked were new to them, at least.

Aristotle in particular reminds me of the phenomenon that happens when people discover something new, and are so excited by it that they race through a huge percentage of the newly discovered territory in one lifetime. If so, that's evidence of how new this kind of thinking was. [5]

This is all to explain how Plato and Aristotle can be very impressive and yet naive and mistaken. It was impressive even to ask the questions they did. That doesn't mean they always came up with good answers. It's not considered insulting to say that ancient Greek mathematicians were naive in some respects, or at least lacked some concepts that would have made their lives easier. So I hope people will not be too offended if I propose that ancient philosophers were similarly naive. In particular, they don't seem to have fully grasped what I earlier called the central fact of philosophy: that words break if you push them too far.

"Much to the surprise of the builders of the first digital computers," Rod Brooks wrote, "programs written for them usually did not work." [6] Something similar happened when people first started trying to talk about abstractions.
Much to their surprise, they didn't arrive at answers they agreed upon. In fact, they rarely seemed to arrive at answers at all.

They were in effect arguing about artifacts induced by sampling at too low a resolution.

The proof of how useless some of their answers turned out to be is how little effect they have. No one after reading Aristotle's Metaphysics does anything differently as a result. [7]

Surely I'm not claiming that ideas have to have practical applications to be interesting? No, they may not have to. Hardy's boast that number theory had no use whatsoever wouldn't disqualify it. But he turned out to be mistaken. In fact, it's suspiciously hard to find a field of math that truly has no practical use. And Aristotle's explanation of the ultimate goal of philosophy in Book A of the Metaphysics implies that philosophy should be useful too.

Theoretical Knowledge

Aristotle's goal was to find the most general of general principles. The examples he gives are convincing: an ordinary worker builds things a certain way out of habit; a master craftsman can do more because he grasps the underlying principles. The trend is clear: the more general the knowledge, the more admirable it is. But then he makes a mistake—possibly the most important mistake in the history of philosophy. He has noticed that theoretical knowledge is often acquired for its own sake, out of curiosity, rather than for any practical need. So he proposes there are two kinds of theoretical knowledge: some that's useful in practical matters and some that isn't. Since people interested in the latter are interested in it for its own sake, it must be more noble. So he sets as his goal in the Metaphysics the exploration of knowledge that has no practical use. Which means no alarms go off when he takes on grand but vaguely understood questions and ends up getting lost in a sea of words.

His mistake was to confuse motive and result.
Certainly, people who want a deep understanding of something are often driven by curiosity rather than any practical need. But that doesn't mean what they end up learning is useless. It's very valuable in practice to have a deep understanding of what you're doing; even if you're never called on to solve advanced problems, you can see shortcuts in the solution of simple ones, and your knowledge won't break down in edge cases, as it would if you were relying on formulas you didn't understand. Knowledge is power. That's what makes theoretical knowledge prestigious. It's also what causes smart people to be curious about certain things and not others; our DNA is not so disinterested as we might think.

So while ideas don't have to have immediate practical applications to be interesting, the kinds of things we find interesting will surprisingly often turn out to have practical applications.

The reason Aristotle didn't get anywhere in the Metaphysics was partly that he set off with contradictory aims: to explore the most abstract ideas, guided by the assumption that they were useless. He was like an explorer looking for a territory to the north of him, starting with the assumption that it was located to the south. And since his work became the map used by generations of future explorers, he sent them off in the wrong direction as well. [8] Perhaps worst of all, he protected them from both the criticism of outsiders and the promptings of their own inner compass by establishing the principle that the most noble sort of theoretical knowledge had to be useless.

The Metaphysics is mostly a failed experiment. A few ideas from it turned out to be worth keeping; the bulk of it has had no effect at all. The Metaphysics is among the least read of all famous books. It's not hard to understand the way Newton's Principia is, but the way a garbled message is.

Arguably it's an interesting failed experiment. But unfortunately that was not the conclusion Aristotle's successors derived from works like the Metaphysics.
[9]

Soon after, the western world fell on intellectual hard times. Instead of version 1s to be superseded, the works of Plato and Aristotle became revered texts to be mastered and discussed. And so things remained for a shockingly long time. It was not till around 1600 (in Europe, where the center of gravity had shifted by then) that one found people confident enough to treat Aristotle's work as a catalog of mistakes. And even then they rarely said so outright.

If it seems surprising that the gap was so long, consider how little progress there was in math between Hellenistic times and the Renaissance.

In the intervening years an unfortunate idea took hold: that it was not only acceptable to produce works like the Metaphysics, but that it was a particularly prestigious line of work, done by a class of people called philosophers. No one thought to go back and debug Aristotle's motivating argument. And so instead of correcting the problem Aristotle discovered by falling into it—that you can easily get lost if you talk too loosely about very abstract ideas—they continued to fall into it.

The Singularity

Curiously, however, the works they produced continued to attract new readers. Traditional philosophy occupies a kind of singularity in this respect. If you write in an unclear way about big ideas, you produce something that seems tantalizingly attractive to inexperienced but intellectually ambitious students. Till one knows better, it's hard to distinguish something that's hard to understand because the writer was unclear in his own mind from something like a mathematical proof that's hard to understand because the ideas it represents are hard to understand. To someone who hasn't learned the difference, traditional philosophy seems extremely attractive: as hard (and therefore impressive) as math, yet broader in scope. That was what lured me in as a high school student.

This singularity is even more singular in having its own defense built in.
When things are hard to understand, people who suspect they're nonsense generally keep quiet. There's no way to prove a text is meaningless. The closest you can get is to show that the official judges of some class of texts can't distinguish them from placebos. [10]

And so instead of denouncing philosophy, most people who suspected it was a waste of time just studied other things. That alone is fairly damning evidence, considering philosophy's claims. It's supposed to be about the ultimate truths. Surely all smart people would be interested in it, if it delivered on that promise.

Because philosophy's flaws turned away the sort of people who might have corrected them, they tended to be self-perpetuating. Bertrand Russell wrote in a letter in 1912:

    Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject. [11]

His response was to launch Wittgenstein at it, with dramatic results. I think Wittgenstein deserves to be famous not for the discovery that most previous philosophy was a waste of time, which judging from the circumstantial evidence must have been made by every smart person who studied a little philosophy and declined to pursue it further, but for how he acted in response. [12] Instead of quietly switching to another field, he made a fuss, from inside. He was Gorbachev.

The field of philosophy is still shaken from the fright Wittgenstein gave it. [13] Later in life he spent a lot of time talking about how words worked. Since that seems to be allowed, that's what a lot of philosophers do now. Meanwhile, sensing a vacuum in the metaphysical speculation department, the people who used to do literary criticism have been edging Kantward, under new names like "literary theory," "critical theory," and when they're feeling ambitious, plain "theory."
The writing is the familiar word salad:

    Gender is not like some of the other grammatical modes which express precisely a mode of conception without any reality that corresponds to the conceptual mode, and consequently do not express precisely something in reality by which the intellect could be moved to conceive a thing the way it does, even where that motive is not something in the thing as such. [14]

The singularity I've described is not going away. There's a market for writing that sounds impressive and can't be disproven. There will always be both supply and demand. So if one group abandons this territory, there will always be others ready to occupy it.

A Proposal

We may be able to do better. Here's an intriguing possibility. Perhaps we should do what Aristotle meant to do, instead of what he did. The goal he announces in the Metaphysics seems one worth pursuing: to discover the most general truths. That sounds good. But instead of trying to discover them because they're useless, let's try to discover them because they're useful.

I propose we try again, but that we use that heretofore despised criterion, applicability, as a guide to keep us from wandering off into a swamp of abstractions. Instead of trying to answer the question:

    What are the most general truths?

let's try to answer the question:

    Of all the useful things we can say, which are the most general?

The test of utility I propose is whether we cause people who read what we've written to do anything differently afterward. Knowing we have to give definite (if implicit) advice will keep us from straying beyond the resolution of the words we're using.

The goal is the same as Aristotle's; we just approach it from a different direction.

As an example of a useful, general idea, consider that of the controlled experiment. There's an idea that has turned out to be widely applicable. Some might say it's part of science, but it's not part of any specific science; it's literally meta-physics (in our sense of "meta").
The idea of evolution is another. It turns out to have quite broad applications—for example, in genetic algorithms and even product design. Frankfurt's distinction between lying and bullshitting seems a promising recent example. [15]

These seem to me what philosophy should look like: quite general observations that would cause someone who understood them to do something differently.

Such observations will necessarily be about things that are imprecisely defined. Once you start using words with precise meanings, you're doing math. So starting from utility won't entirely solve the problem I described above—it won't flush out the metaphysical singularity. But it should help. It gives people with good intentions a new roadmap into abstraction. And they may thereby produce things that make the writing of the people with bad intentions look bad by comparison.

One drawback of this approach is that it won't produce the sort of writing that gets you tenure. And not just because it's not currently the fashion. In order to get tenure in any field you must not arrive at conclusions that members of tenure committees can disagree with. In practice there are two kinds of solutions to this problem. In math and the sciences, you can prove what you're saying, or at any rate adjust your conclusions so you're not claiming anything false ("6 of 8 subjects had lower blood pressure after the treatment"). In the humanities you can either avoid drawing any definite conclusions (e.g. conclude that an issue is a complex one), or draw conclusions so narrow that no one cares enough to disagree with you.

The kind of philosophy I'm advocating won't be able to take either of these routes. At best you'll be able to achieve the essayist's standard of proof, not the mathematician's or the experimentalist's. And yet you won't be able to meet the usefulness test without implying definite and fairly broadly applicable conclusions.
Worse still, the usefulness test will tend to produce results that annoy people: there's no use in telling people things they already believe, and people are often upset to be told things they don't.

Here's the exciting thing, though. Anyone can do this. Getting to general plus useful by starting with useful and cranking up the generality may be unsuitable for junior professors trying to get tenure, but it's better for everyone else, including professors who already have it. This side of the mountain is a nice gradual slope. You can start by writing things that are useful but very specific, and then gradually make them more general. Joe's has good burritos. What makes a good burrito? What makes good food? What makes anything good? You can take as long as you want. You don't have to get all the way to the top of the mountain. You don't have to tell anyone you're doing philosophy.

If it seems like a daunting task to do philosophy, here's an encouraging thought. The field is a lot younger than it seems. Though the first philosophers in the western tradition lived about 2500 years ago, it would be misleading to say the field is 2500 years old, because for most of that time the leading practitioners weren't doing much more than writing commentaries on Plato or Aristotle while watching over their shoulders for the next invading army. In the times when they weren't, philosophy was hopelessly intermingled with religion. It didn't shake itself free till a couple hundred years ago, and even then was afflicted by the structural problems I've described above. If I say this, some will say it's a ridiculously overbroad and uncharitable generalization, and others will say it's old news, but here goes: judging from their works, most philosophers up to the present have been wasting their time. So in a sense the field is still at the first step. [16]

That sounds a preposterous claim to make. It won't seem so preposterous in 10,000 years. Civilization always seems old, because it's always the oldest it's ever been.
The only way to say whether something is really old or not is by looking at structural evidence, and structurally philosophy is young; it's still reeling from the unexpected breakdown of words.

Philosophy is as young now as math was in 1500. There is a lot more to discover.

Notes

[1] In practice formal logic is not much use, because despite some progress in the last 150 years we're still only able to formalize a small percentage of statements. We may never do that much better, for the same reason 1980s-style "knowledge representation" could never have worked; many statements may have no representation more concise than a huge, analog brain state.

[2] It was harder for Darwin's contemporaries to grasp this than we can easily imagine. The story of creation in the Bible is not just a Judeo-Christian concept; it's roughly what everyone must have believed since before people were people. The hard part of grasping evolution was to realize that species weren't, as they seem to be, unchanging, but had instead evolved from different, simpler organisms over unimaginably long periods of time.

Now we don't have to make that leap. No one in an industrialized country encounters the idea of evolution for the first time as an adult. Everyone's taught about it as a child, either as truth or heresy.

[3] Greek philosophers before Plato wrote in verse. This must have affected what they said. If you try to write about the nature of the world in verse, it inevitably turns into incantation. Prose lets you be more precise, and more tentative.

[4] Philosophy is like math's ne'er-do-well brother. It was born when Plato and Aristotle looked at the works of their predecessors and said in effect "why can't you be more like your brother?" Russell was still saying the same thing 2300 years later.

Math is the precise half of the most abstract ideas, and philosophy the imprecise half.
It's probably inevitable that philosophy will suffer by comparison, because there's no lower bound to its precision. Bad math is merely boring, whereas bad philosophy is nonsense. And yet there are some good ideas in the imprecise half.

[5] Aristotle's best work was in logic and zoology, both of which he can be said to have invented. But the most dramatic departure from his predecessors was a new, much more analytical style of thinking. He was arguably the first scientist.

[6] Brooks, Rodney, Programming in Common Lisp, Wiley, 1985, p. 94.

[7] Some would say we depend on Aristotle more than we realize, because his ideas were one of the ingredients in our common culture. Certainly a lot of the words we use have a connection with Aristotle, but it seems a bit much to suggest that we wouldn't have the concept of the essence of something or the distinction between matter and form if Aristotle hadn't written about them.

One way to see how much we really depend on Aristotle would be to diff European culture with Chinese: what ideas did European culture have in 1800 that Chinese culture didn't, in virtue of Aristotle's contribution?

[8] The meaning of the word "philosophy" has changed over time. In ancient times it covered a broad range of topics, comparable in scope to our "scholarship" (though without the methodological implications). Even as late as Newton's time it included what we now call "science." But the core of the subject today is still what seemed to Aristotle the core: the attempt to discover the most general truths.

Aristotle didn't call this "metaphysics." That name got assigned to it because the books we now call the Metaphysics came after (meta = after) the Physics in the standard edition of Aristotle's works compiled by Andronicus of Rhodes three centuries later.
What we call "metaphysics" Aristotle called "first philosophy."

[9] Some of Aristotle's immediate successors may have realized this, but it's hard to say because most of their works are lost.

[10] Sokal, Alan, "Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity," Social Text 46/47, pp. 217-252.

Abstract-sounding nonsense seems to be most attractive when it's aligned with some axe the audience already has to grind. If this is so we should find it's most popular with groups that are (or feel) weak. The powerful don't need its reassurance.

[11] Letter to Ottoline Morrell, December 1912. Quoted in: Monk, Ray, Ludwig Wittgenstein: The Duty of Genius, Penguin, 1991, p. 75.

[12] A preliminary result, that all metaphysics between Aristotle and 1783 had been a waste of time, is due to I. Kant.

[13] Wittgenstein asserted a sort of mastery to which the inhabitants of early 20th century Cambridge seem to have been peculiarly vulnerable—perhaps partly because so many had been raised religious and then stopped believing, so had a vacant space in their heads for someone to tell them what to do (others chose Marx or Cardinal Newman), and partly because a quiet, earnest place like Cambridge in that era had no natural immunity to messianic figures, just as European politics then had no natural immunity to dictators.

[14] This is actually from the Ordinatio of Duns Scotus (ca. 1300), with "number" replaced by "gender." Plus ca change. Wolter, Allan (trans), Duns Scotus: Philosophical Writings, Nelson, 1963, p. 92.

[15] Frankfurt, Harry, On Bullshit, Princeton University Press, 2005.

[16] Some introductions to philosophy now take the line that philosophy is worth studying as a process rather than for any particular truths you'll learn. The philosophers whose works they cover would be rolling in their graves at that. They hoped they were doing more than serving as examples of how to argue: they hoped they were getting results.
Most were wrong, but it doesn't seem an impossible hope.

This argument seems to me like someone in 1500 looking at the lack of results achieved by alchemy and saying its value was as a process. No, they were going about it wrong. It turns out it is possible to transmute lead into gold (though not economically at current energy prices), but the route to that knowledge was to backtrack and try another approach.

Thanks to Trevor Blackwell, Paul Buchheit, Jessica Livingston, Robert Morris, Mark Nitzberg, and Peter Norvig for reading drafts of this.