Paul Graham: Essays
Design and Research

 

This article examines the difference between treating a programming language as a design problem rather than a research topic. It argues that design must be user-centered, attending to what users need rather than simply what they ask for; that good design has to be aimed at actual users; that this user-centered principle gives rise to much of the practice of good design; and it discusses how the "Worse is Better" idea applies in software and in the arts.

🎯 The difference between design and research is new versus good: design has to be good, research has to be new

👤 Design must be user-centered, focusing on what users need rather than what they ask for

💻 Good design has to consider actual users, including the designer himself

🎨 Designing a programming language, like design in the arts, has to take human factors into account

🚀 The "Worse is Better" idea stresses getting a prototype out quickly and refining it gradually

January 2003

(This article is derived from a keynote talk at the fall 2002 meeting of NEPLS.)

Visitors to this country are often surprised to find that Americans like to begin a conversation by asking "what do you do?" I've never liked this question. I've rarely had a neat answer to it. But I think I have finally solved the problem. Now, when someone asks me what I do, I look them straight in the eye and say "I'm designing a new dialect of Lisp." I recommend this answer to anyone who doesn't like being asked what they do. The conversation will turn immediately to other topics.

I don't consider myself to be doing research on programming languages. I'm just designing one, in the same way that someone might design a building or a chair or a new typeface. I'm not trying to discover anything new. I just want to make a language that will be good to program in. In some ways, this assumption makes life a lot easier.

The difference between design and research seems to be a question of new versus good. Design doesn't have to be new, but it has to be good. Research doesn't have to be good, but it has to be new. I think these two paths converge at the top: the best design surpasses its predecessors by using new ideas, and the best research solves problems that are not only new, but actually worth solving. So ultimately we're aiming for the same destination, just approaching it from different directions.

What I'm going to talk about today is what your target looks like from the back. What do you do differently when you treat programming languages as a design problem instead of a research topic?

The biggest difference is that you focus more on the user. Design begins by asking, who is this for and what do they need from it? A good architect, for example, does not begin by creating a design that he then imposes on the users, but by studying the intended users and figuring out what they need.

Notice I said "what they need," not "what they want." I don't mean to give the impression that working as a designer means working as a sort of short-order cook, making whatever the client tells you to. This varies from field to field in the arts, but I don't think there is any field in which the best work is done by the people who just make exactly what the customers tell them to.

The customer is always right in the sense that the measure of good design is how well it works for the user. If you make a novel that bores everyone, or a chair that's horribly uncomfortable to sit in, then you've done a bad job, period. It's no defense to say that the novel or the chair is designed according to the most advanced theoretical principles.

And yet, making what works for the user doesn't mean simply making what the user tells you to. Users don't know what all the choices are, and are often mistaken about what they really want.

The answer to the paradox, I think, is that you have to design for the user, but you have to design what the user needs, not simply what he says he wants. It's much like being a doctor. You can't just treat a patient's symptoms. When a patient tells you his symptoms, you have to figure out what's actually wrong with him, and treat that.

This focus on the user is a kind of axiom from which most of the practice of good design can be derived, and around which most design issues center.

If good design must do what the user needs, who is the user? When I say that design must be for users, I don't mean to imply that good design aims at some kind of lowest common denominator. You can pick any group of users you want.
If you're designing a tool, for example, you can design it for anyone from beginners to experts, and what's good design for one group might be bad for another. The point is, you have to pick some group of users. I don't think you can even talk about good or bad design except with reference to some intended user.

You're most likely to get good design if the intended users include the designer himself. When you design something for a group that doesn't include you, it tends to be for people you consider to be less sophisticated than you, not more sophisticated. That's a problem, because looking down on the user, however benevolently, seems inevitably to corrupt the designer. I suspect that very few housing projects in the US were designed by architects who expected to live in them.

You can see the same thing in programming languages. C, Lisp, and Smalltalk were created for their own designers to use. Cobol, Ada, and Java were created for other people to use. If you think you're designing something for idiots, the odds are that you're not designing something good, even for idiots.

Even if you're designing something for the most sophisticated users, though, you're still designing for humans. It's different in research. In math you don't choose abstractions because they're easy for humans to understand; you choose whichever make the proof shorter. I think this is true for the sciences generally. Scientific ideas are not meant to be ergonomic.

Over in the arts, things are very different. Design is all about people. The human body is a strange thing, but when you're designing a chair, that's what you're designing for, and there's no way around it. All the arts have to pander to the interests and limitations of humans. In painting, for example, all other things being equal a painting with people in it will be more interesting than one without. It is not merely an accident of history that the great paintings of the Renaissance are all full of people. If they hadn't been, painting as a medium wouldn't have the prestige that it does.

Like it or not, programming languages are also for people, and I suspect the human brain is just as lumpy and idiosyncratic as the human body. Some ideas are easy for people to grasp and some aren't. For example, we seem to have a very limited capacity for dealing with detail. It's this fact that makes programming languages a good idea in the first place; if we could handle the detail, we could just program in machine language.

Remember, too, that languages are not primarily a form for finished programs, but something that programs have to be developed in. Anyone in the arts could tell you that you might want different mediums for the two situations. Marble, for example, is a nice, durable medium for finished ideas, but a hopelessly inflexible one for developing new ideas.

A program, like a proof, is a pruned version of a tree that in the past has had false starts branching off all over it. So the test of a language is not simply how clean the finished program looks in it, but how clean the path to the finished program was. A design choice that gives you elegant finished programs may not give you an elegant design process. For example, I've written a few macro-defining macros full of nested backquotes that look now like little gems, but writing them took hours of the ugliest trial and error, and frankly, I'm still not entirely sure they're correct.
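As a concrete illustration (mine, not the essay's), here is a sketch in Common Lisp of the kind of macro-defining macro being described. It is modeled on the abbreviation-defining macro from On Lisp; the names abbrev and dbind are only for the example.

    ;; A macro that writes macros: ABBREV defines SHORT as an
    ;; abbreviation that expands into a call to LONG. The nested
    ;; backquote (and the ,', idiom) is what makes macros like
    ;; this hard to get right on the first try.
    (defmacro abbrev (short long)
      `(defmacro ,short (&rest args)
         `(,',long ,@args)))

    ;; Usage sketch:
    ;;   (abbrev dbind destructuring-bind)
    ;;   (dbind (a b) '(1 2) (+ a b))   ; expands to DESTRUCTURING-BIND, returns 3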
We often act as if the test of a language were how good finished programs look in it. It seems so convincing when you see the same program written in two languages, and one version is much shorter. When you approach the problem from the direction of the arts, you're less likely to depend on this sort of test. You don't want to end up with a programming language like marble.

For example, it is a huge win in developing software to have an interactive toplevel, what in Lisp is called a read-eval-print loop. And when you have one this has real effects on the design of the language. It would not work well for a language where you have to declare variables before using them, for example. When you're just typing expressions into the toplevel, you want to be able to set x to some value and then start doing things to x. You don't want to have to declare the type of x first. You may dispute either of the premises, but if a language has to have a toplevel to be convenient, and mandatory type declarations are incompatible with a toplevel, then no language that makes type declarations mandatory could be convenient to program in.
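To make that workflow concrete, here is a hypothetical toplevel session of the kind the argument has in mind, in Common Lisp; x is never declared before use, you just bind it and start poking at it (most implementations accept this at the toplevel, perhaps with a warning about an undefined variable):

    > (setq x (list 1 2 3))            ; bind x to a value, no declaration needed
    (1 2 3)
    > (mapcar (lambda (n) (* n n)) x)  ; immediately start doing things to x
    (1 4 9)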
In practice, to get good design you have to get close, and stay close, to your users. You have to calibrate your ideas on actual users constantly, especially in the beginning. One of the reasons Jane Austen's novels are so good is that she read them out loud to her family. That's why she never sinks into self-indulgently arty descriptions of landscapes, or pretentious philosophizing. (The philosophy's there, but it's woven into the story instead of being pasted onto it like a label.) If you open an average "literary" novel and imagine reading it out loud to your friends as something you'd written, you'll feel all too keenly what an imposition that kind of thing is upon the reader.

In the software world, this idea is known as Worse is Better. Actually, there are several ideas mixed together in the concept of Worse is Better, which is why people are still arguing about whether worse is actually better or not. But one of the main ideas in that mix is that if you're building something new, you should get a prototype in front of users as soon as possible.

The alternative approach might be called the Hail Mary strategy. Instead of getting a prototype out quickly and gradually refining it, you try to create the complete, finished product in one long touchdown pass. As far as I know, this is a recipe for disaster. Countless startups destroyed themselves this way during the Internet bubble. I've never heard of a case where it worked.

What people outside the software world may not realize is that Worse is Better is found throughout the arts. In drawing, for example, the idea was discovered during the Renaissance. Now almost every drawing teacher will tell you that the right way to get an accurate drawing is not to work your way slowly around the contour of an object, because errors will accumulate and you'll find at the end that the lines don't meet. Instead you should draw a few quick lines in roughly the right place, and then gradually refine this initial sketch.

In most fields, prototypes have traditionally been made out of different materials. Typefaces to be cut in metal were initially designed with a brush on paper. Statues to be cast in bronze were modelled in wax. Patterns to be embroidered on tapestries were drawn on paper with ink wash. Buildings to be constructed from stone were tested on a smaller scale in wood.

What made oil paint so exciting, when it first became popular in the fifteenth century, was that you could actually make the finished work from the prototype. You could make a preliminary drawing if you wanted to, but you weren't held to it; you could work out all the details, and even make major changes, as you finished the painting.

You can do this in software too. A prototype doesn't have to be just a model; you can refine it into the finished product. I think you should always do this when you can. It lets you take advantage of new insights you have along the way. But perhaps even more important, it's good for morale.

Morale is key in design. I'm surprised people don't talk more about it. One of my first drawing teachers told me: if you're bored when you're drawing something, the drawing will look boring. For example, suppose you have to draw a building, and you decide to draw each brick individually. You can do this if you want, but if you get bored halfway through and start making the bricks mechanically instead of observing each one, the drawing will look worse than if you had merely suggested the bricks.

Building something by gradually refining a prototype is good for morale because it keeps you engaged. In software, my rule is: always have working code. If you're writing something that you'll be able to test in an hour, then you have the prospect of an immediate reward to motivate you. The same is true in the arts, and particularly in oil painting. Most painters start with a blurry sketch and gradually refine it. If you work this way, then in principle you never have to end the day with something that actually looks unfinished. Indeed, there is even a saying among painters: "A painting is never finished, you just stop working on it." This idea will be familiar to anyone who has worked on software.

Morale is another reason that it's hard to design something for an unsophisticated user. It's hard to stay interested in something you don't like yourself. To make something good, you have to be thinking, "wow, this is really great," not "what a piece of shit; those fools will love it."

Design means making things for humans. But it's not just the user who's human. The designer is human too.

Notice all this time I've been talking about "the designer." Design usually has to be under the control of a single person to be any good. And yet it seems to be possible for several people to collaborate on a research project. This seems to me one of the most interesting differences between research and design.

There have been famous instances of collaboration in the arts, but most of them seem to have been cases of molecular bonding rather than nuclear fusion. In an opera it's common for one person to write the libretto and another to write the music. And during the Renaissance, journeymen from northern Europe were often employed to do the landscapes in the backgrounds of Italian paintings. But these aren't true collaborations. They're more like examples of Robert Frost's "good fences make good neighbors." You can stick instances of good design together, but within each individual project, one person has to be in control.

I'm not saying that good design requires that one person think of everything. There's nothing more valuable than the advice of someone whose judgement you trust.
But after the talking is done, the decision about what to do has to rest with one person.

Why is it that research can be done by collaborators and design can't? This is an interesting question. I don't know the answer. Perhaps, if design and research converge, the best research is also good design, and in fact can't be done by collaborators. A lot of the most famous scientists seem to have worked alone. But I don't know enough to say whether there is a pattern here. It could be simply that many famous scientists worked when collaboration was less common.

Whatever the story is in the sciences, true collaboration seems to be vanishingly rare in the arts. Design by committee is a synonym for bad design. Why is that so? Is there some way to beat this limitation?

I'm inclined to think there isn't -- that good design requires a dictator. One reason is that good design has to be all of a piece. Design is not just for humans, but for individual humans. If a design represents an idea that fits in one person's head, then the idea will fit in the user's head too.
