LessWrong, August 9, 2024
[LDSL#1] Performance optimization as a metaphor for life

Published on August 8, 2024 4:16 PM GMT

Followup to: Some epistemological conundrums. Response to: The omnigenic model as a metaphor for life. Part 1 of this post introduces a concept from programming; part 2 applies it to life in general. This post is also available on my Substack.

Part 1: Performance optimization

In computer programming, there is a tradeoff between software that is quick and easy for humans to write and read, versus software that is fast for the computer to execute. Moving from the former to the latter is called “performance optimization”.

Moving from the latter to the former is called “high-level rewriting”. This is rarely done intentionally, but a substantial part of the reason computers are sometimes still sluggish despite having gotten orders of magnitude faster is that high-level rewriting has happened implicitly as old software that focused more on performance has been replaced with new software that focuses more on developer speed.

Because programming is done at such a high level, there is almost always a way to do performance optimization in any piece of code. Yet, it’s considered a beginner mistake to just pick a random piece of code to go optimize, which is sometimes encapsulated in sayings like “premature optimization is the root of all evil” or “the epsilon fallacy”. Instead, a recommended approach is to run the code, measure what takes the most resources, and fix that. But why is that so important?

Computers are fast, or alternatively, software/data/people/society is very slow and long-term-oriented compared to individual instructions. Let’s say there’s something a computer can do in 1 microsecond, but you write the code inefficiently, so it takes 1000x too long. This means that it takes… 0.001 seconds, aka basically nothing in most circumstances.

But let’s say that it’s an operation that you need to apply to 10,000 datapoints, so you need to repeat it 10,000 times. Now it suddenly takes 10 seconds, which is most likely worth optimizing, as you wouldn’t want to wait, say, 10 seconds for a page to load. If you need to apply it to each pair of the 10,000 datapoints, then you’ve got a 10,000-by-10,000 grid, or 100,000,000 repetitions, which will take more than a day to execute, so this absolutely needs to be optimized.
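The back-of-the-envelope arithmetic above can be written out directly (the 1000x slowdown factor is the one assumed in the text):

```python
# One 1-microsecond operation, written 1000x too slow:
op_seconds = 1e-6 * 1000           # 0.001 s -- basically nothing, once

per_item = op_seconds * 10_000     # applied to 10,000 datapoints
per_pair = op_seconds * 10_000**2  # applied to every pair of them

print(per_item)            # 10.0 seconds -- worth optimizing
print(per_pair / 86_400)   # over a day (86,400 seconds per day)
```

The jump from "basically nothing" to "over a day" comes entirely from the multipliers, not from the operation itself.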

Usually, an execution of some code consists of many pieces whose resource use adds up. Some of these pieces just run a quick operation once, whereas others are multiplied by numbers such as dataset size, internet latency, or similar. This leads to resource use varying by orders of magnitude. If you add up the few pieces that take up the most resources, then there’s usually not much unexplained resource usage, so you should focus on optimizing those pieces, and can’t get much from optimizing the other pieces.
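A toy profile illustrates the point; the piece names and timings here are made up, but the shape (a couple of multiplied pieces dwarfing everything else) is the typical one:

```python
# Hypothetical per-piece runtimes, in seconds, spanning orders of
# magnitude because some pieces are multiplied by dataset size.
timings = {
    "pairwise_dist": 91.0,  # runs once per pair -- dominates
    "parse_row":      8.0,  # runs once per datapoint
    "write_report":   0.5,
    "log_setup":      0.3,
    "load_config":    0.2,
}

total = sum(timings.values())
ranked = sorted(timings.values(), reverse=True)

# The top two pieces explain nearly all of the runtime, so optimizing
# the remaining pieces can't gain much.
top_two = sum(ranked[:2])
print(f"top two pieces: {top_two / total:.0%} of total")
```

Here the top two pieces account for 99% of the runtime, leaving almost no "unexplained" resource usage among the rest.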

Part 2: As a metaphor for life

In the omnigenic model as a metaphor for life, Scott Alexander presents Fisher’s infinitesimal model (called “the omnigenic model”), which asserts that genes influence phenotypes through a large number of variants of small individual effect. He suggests using this as a general causal model, not just for genomics, but for science and all of life.

Scott Alexander is vaguely aware that this causal model doesn’t quite cover everything:

Now the biopsychosocial model has caught on and everyone agrees that depression is complicated. I don’t know if we’re still at the “dozens of things” stage or the “hundreds of things stage”, but I don’t think anyone seriously thinks it’s fewer than a dozen. The structure of depression seems different from the structure of genetic traits in that one cause can still have a large effect; multiple sclerosis might explain less than 1% of the variance in depressedness, but there will be a small sample of depressives whose condition is almost entirely because of multiple sclerosis. But overall, I think the analogy to genetics is a good one.

… but he doesn’t go that much in depth about it, and ultimately still suggests using similar approaches to genomics, at least as a conceptual ideal, with the greatest challenge being the effect size estimation.

Yet, we could instead use performance optimization as a metaphor for life. Multiple sclerosis only causes slight variance in depressedness, but in those people where it is the cause, not only does it have a huge effect, but it is also presumably relatively obvious, since it is a severe condition with lots of other effects. Multiple sclerosis cannot yet be cured, but it can be somewhat mitigated with certain medical treatments. I assume this is superior to simply treating the depression (with antidepressants?), e.g. since it also helps with MS effects other than depression.

I would use performance optimization as a metaphor for life. Similar to how poor software performance in a big piece of code tends to come from singular loops, problems and opportunities in life in general tend to be rare, extreme, individual things.

Scott Alexander brings up another good example in a different blog post, How Bad Are Things?

A perfectly average patient will be a 70 year old woman who used to live somewhere else but who moved here a few years ago after her husband died in order to be closer to family. She has some medical condition or other that prevents her from driving or walking around much, and the family she wanted to be closer to have their own issues, so she has no friends within five hundred miles and never leaves her house except to go to doctors’ appointments. She has one son, who is in jail, and one daughter, who married a drug addict. She also has one grandchild, her only remaining joy in the world – but her drug-addict son-in-law uses access to him as a bargaining chip to make her give him money from her rapidly-dwindling retirement account so he can buy drugs. When she can’t cough up enough quickly enough, he bans her from visiting or talking to the grandchild, plus he tells the grandchild it’s her fault. Her retirement savings are rapidly running out and she has no idea what she will do when they’re gone. Probably end up on the street. Also, her dog just died.

If my patients were to read the above paragraph, there are a handful who would sue me for breach of confidentiality, assuming I had just written down their medical history and gotten a couple of details like the number of children wrong. I didn’t. This is a type.

Here we have a few large factors (most notably wealth, the grandchild, drugs, and aging) which multiplicatively interact to produce a terrible dynamic. If even one of these factors was small, this problem probably wouldn’t be occurring (though e.g. lack of wealth would replace it with a different problem, probably - unless she could move in with her daughter or something). This is precisely analogous to the multiplicative interaction in performance optimization.

If this is how life usually looks, then it suggests precisely the opposite model from the omnigenic model as a metaphor for life. Using statistics to explain variance works well if you have a homogeneous population, as then you can find the parameters for the mechanism you are studying. However, statistics works less well if you have a distribution with varying mechanisms.

Probably the biggest problem with statistics is that one might, e.g., take the logarithm in order to capture multiplicative effects and reduce sensitivity to outliers. But this makes the statistics focus entirely on the common factors and ignore the special ones, which is a problem because, with long tails, the special factors are all that matter.
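A small numeric sketch of this effect, with made-up values: one observation carrying an extreme 1000x factor among otherwise ordinary ones. The arithmetic mean is driven almost entirely by the extreme case, while the log-scale summary (the geometric mean) barely registers it:

```python
import math

# Five ordinary observations and one with an extreme factor.
values = [1.1, 0.9, 1.0, 1.2, 0.8, 1000.0]

arithmetic_mean = sum(values) / len(values)
geometric_mean = math.exp(sum(math.log(v) for v in values) / len(values))

print(round(arithmetic_mean, 1))  # the extreme case dominates
print(round(geometric_mean, 1))   # log-scale statistics barely see it
```

Whether that is a feature or a bug depends on the question: if the extreme factors are what matter, averaging on the log scale hides exactly the thing you should be looking at.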

If you are dealing with a case that is affected by a mixture of many different long-tailed phenomena, you can understand it well by figuring out which factors are extreme for this particular case, and what their causes and consequences are. While there will be lots of factors that this doesn’t account for, these factors are “small” and thus don’t really matter.

You might think this is just an artifact of dealing with clinical subjects, who are pre-selected for having a pathology, but this holds even when navigating the absolute basics of your everyday life:

A good analogy would be a table, like in the image above. There’s some big factors that influenced this table; someone put a book and a cup and a watch and a potted plant on it. But these factors don’t actually account for most of the log-variance of the table. The boards of the table have a grain with lots of detailed indentations; the wood is probably covered in dust and microbes of different species. The microbes’ DNA has a lot of entropy, not just in the species-shared aspects, but also in their individual mutations. Further, every molecule vibrates with its own chaotic heat.

One cannot describe the table, only a very small fraction of the factors that influence it. However, the factors vary wildly in their magnitude, and by describing the biggest factors on it, one gets a pretty good idea about the things that matter.

(Even if there are lots of small factors that seem to add up to a big dangerous factor, e.g. some of the microbes are of a dangerous ebola species that could kill you, it is still more feasible to describe them using the cause, e.g. a spill of a sample containing ebola, than it is to list each ebola microbe individually.)
