I shall ask, then, why is it really worth while to make a serious study of mathematics? What is the proper justification of a mathematician’s life? And my answers will be, for the most part, such as are expected from a mathematician: I think that it is worth while, that there is ample justification. But I should say at once that my defence of mathematics will be a defence of myself, and that my apology is bound to be to some extent egotistical. I should not think it worth while to apologize for my subject if I regarded myself as one of its failures.
(G. H. Hardy, A Mathematician’s Apology, 1940, p.3)
Earlier this week, I was having a lovely chat with a friend and reader of the blog, when he asked me a question along the lines of “Hey, what’s the overall point of your blog?”
I get why he asked the question: I write about a lot of different topics, from self-help models to the discovery of oral composition, historiography, and cooking. Looking in from the outside, it’s probably hard to see a common thread. Even from the inside, a lot of these ideas emerged from my wide-ranging curiosity, and what felt meaningful to explore and ponder at the time.
So during this conversation, I deflected answering the question — it’s about whatever methodological ideas I found interesting. Or maybe it’s about a bigger thing, but I don’t want to suck the fun out of my personal writing by enshrining a massive should on top of it.
Since then I’ve come back to this question multiple times, and noticed that it was an interesting one. It elicited new thoughts and intuitions, it clarified some vaguely felt priorities, and, most importantly, it imbued the pursuit with more meaning without the all-too-common side effect of anxiety.
Thus, I want to offer an apology (a defence) of Methodology as I envision it. I’m not claiming that everyone should stop whatever they’re doing and start studying it full-time (heck, I’m not doing that), but I still see both meaning and practical usefulness in a deep theory of Methodology.
What Is Methodology?
At its heart, my vision of Methodology stems from a fascination with all the ways human beings solve problems.
Said like that, it sounds too dry, too technical. I don’t just mean science and engineering, though these definitely count. I’m also fascinated by how we make tasty meals, how we figure out completely different traditions, how we anticipate problems before doing the thing, how we recognize good gelato, how we improve our models of ourselves, how we give the impression of life in 2D animation…
I don’t know if I’m curious about literally everything, but honestly, I have been surprised before by how a completely unexpected interest bubbles up when I start smelling the subtleties of a domain. For example, I believed that I cared nothing for politics, law, or institutional design, until I started actually reading about these topics[1], observing them, and learning to see the methods behind them.
So to me, the purest question of Methodology is “How do these methods work?”
Which almost instantly leads to the complement: “Why are we systematically failing at solving some problems, given our wealth of powerful methods?” Because sometimes humans fail completely at tackling important problems. Nutrition is still a joke, AI Alignment has made next to no progress in ten years, our best understanding of the internals of LLMs is ridiculously insufficient, and institutions and trust still decay in ways that we are unable to curb.
And considering both successes and failures side by side raises maybe the last, and yet deepest question of Methodology: “Why do the methods that bring in the successes fail to solve the open problems?” An instance is: why do physicists almost always fail to solve complex problems outside of their discipline, despite their incredible success within it, and the strong selection effect for IQ in their education? We have some of the smartest people in the world, armed with the tools and tricks that have provided our deepest and most powerful theories ever, breaking their teeth on social sciences, on AI alignment, on medicine. There is something to understand here.[2]
Methodology assumes that there is a united and coherent answer to these questions.
This is not a given. Paul Feyerabend, in Against Method, after which this blog is named (in opposition), used his formidable analytical skills to show the vast subtleties and complexity of methods, undermining the logical positivists’ case for an extremely simple and universal method behind all of science.
And then, following his anarchist leanings, he jumped from “there are more subtleties in methods than your philosophy can dream” to “there cannot be any general method, so anything goes”.
It is clear, then, that the idea of a fixed method, or of a fixed theory of rationality, rests on too naive a view of man and his social surroundings. To those who look at the rich material provided by history, and who are not intent on impoverishing it in order to please their lower instincts, their craving for intellectual security in the form of clarity, precision, 'objectivity', 'truth', it will become clear that there is only one principle that can be defended under all circumstances and in all stages of human development. It is the principle: anything goes.
(Paul Feyerabend, Against Method, 1975, p.19)
With Feyerabend, I agree that the ridiculously oversimplified logical models of twentieth-century Philosophy of Science (what Mark Wilson calls “Theory-T Thinking”) cannot do justice to the wealth of variation in human methods, and that for that reason they are hopelessly insufficient.
Yet that doesn’t mean there cannot be a theory of methods that captures and respects this variety. Indeed, I believe the seeds of such a theory already exist.
The Bedrock of Regularities
The revelation of this key to unlocking Methodology lay in a weird book named Physics Avoidance, by Mark Wilson. It is somehow a philosophy of language book that uses detailed analyses of actual science, mostly materials science and continuum mechanics, to rectify by analogy weaknesses in both philosophy of science and philosophy of language.
For whatever reason, it piqued my curiosity enough that I ordered it and actually started reading it, despite my strong dislike for philosophy of language. And amidst Wilson’s witty and digressive analyses of language and science, I ran into this passage:
We are not supernatural intellects; we forever remain the evolved descendants of humble hunter-gatherers, who must cobble together and redirect our modest computational inheritance in the pursuit of more sophisticated objectives. Philosophers often proceed on the presumption that we possess bigger brains and inferential skills than we do, able to juggle descriptive parameters and computational processes far beyond our actual capacities. But with a limited stock of words and smallish brains, we must forever seek roundabout strategies that allow us to handle the extremely large range of challenges that we confront within science and everyday practice […].
(Mark Wilson, Physics Avoidance, 2017, p.4)
The idea is retrospectively obvious, and yet it was a complete paradigm change for me. Wilson doesn’t ask why we fail at solving thorny problems of medicine, institutional design, or AI alignment; instead he turns the inquiry around, and asks how on earth we even solve the most basic physics problems, given all our computational limitations. Most problems, when you look into them deeply, in principle require simulating an enormous number of variables and possibilities, which our supercomputers cannot do, let alone our puny brains. And yet the physics gets done, the bridges stand, and the cooking is delicious (modulo a bit of skill).
His answer to this conundrum? We cheat. We never directly confront the computational intractability of our problems, instead shortcutting them:
- Many, if not most, physical processes follow “well-behaved” mathematical functions, so we can guess right quite quickly (a toy sketch of this appears after Wilson’s quote below);
- Cooking almost never depends on precise ratios (though pâtisserie and baking do), which means that we can cook delicious meals, even following a recipe, without getting the scale out; or
- Our life-recognition algorithms mostly rely on the deformation of living bodies to recognize movement, and so 2D animation mostly only has to get that right to sell the living presence of characters.
In the context where he writes, Wilson calls these tricks instances of “physics avoidance”.
And so it is within practical science. The surest and happiest routes to predictive, explanatory, and design success do not always lie directly ahead, but employ clever stratagems for evading the computational hazards that render the direct path unpassable. Such evasive approaches succeed through adopting various covert strategies for what I shall call physics avoidance. Most physical treatments one encounters in real life are characterized by some form of deductive evasion. Plainly, we should skirt difficult calculations if they aren’t required for the answers we seek and are prone to introduce errors.
(Mark Wilson, Physics Avoidance, 2017, p.52)
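To make the first of these shortcuts concrete, here is a toy sketch of my own (not from Wilson’s book; the function and the numbers are invented for illustration): because the response we care about happens to be smooth, a handful of expensive evaluations plus a cheap fitted surrogate answers questions that brute-force evaluation would make intractable.

```python
import numpy as np

# An "expensive" physical response that happens to be well-behaved (smooth).
# Pretend each call costs hours of simulation or lab time.
def expensive_response(x):
    return np.sin(x) + 0.1 * x**2

# Physics avoidance, toy version: exploit the smoothness regularity by paying
# for only a handful of evaluations and fitting a cheap polynomial surrogate.
sample_x = np.linspace(0.0, 3.0, 6)          # 6 "expensive" evaluations
sample_y = expensive_response(sample_x)
surrogate = np.polynomial.Polynomial.fit(sample_x, sample_y, deg=4)

# The surrogate now answers questions everywhere in the range, almost for free.
dense_x = np.linspace(0.0, 3.0, 300)
max_error = np.max(np.abs(surrogate(dense_x) - expensive_response(dense_x)))
print(f"max error of the 6-sample surrogate: {max_error:.4f}")
```

The trick only works because the smoothness is really there: on a jagged or chaotic response, those same six samples would tell us almost nothing.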
For these shortcuts to work, though, reality needs to offer the opportunities for them. That is, we can only avoid the intractability if the system under study or design has properties which we can exploit for such shortcuts.
I have been calling these properties “epistemic regularities” in previous posts, while Mark Wilson calls them “descriptive opportunities”.
To this end, the [models at different scale] within a multiscalar modeling [of a material such as granite or steel] are often calibrated to correspond to the descriptive opportunities offered within nature itself. What do I mean by a descriptive opportunity? (a phrase that will reverberate often in the pages ahead). Answer: physical circumstances whose dominant ranges of variation can be adequately captured in a smallish number of descriptive parameters and where questions of significant interest can be addressed through feasible calculation. All of the characteristic behaviors we assigned to steel or granite at different RVE levels supply “opportunities” of this character; a multiscalar modelling needs to link them together in a fruitful manner.
(Mark Wilson, Physics Avoidance, 2017, p.17)
This is particularly exciting because all three of our core questions above have answers in terms of regularities:
- Successful methods work because they exploit underlying regularities;
- Open questions are hard because they lack the underlying regularities that we know how to exploit; and
- Methods often fail to transfer from successful fields to “hard” ones because the regularities on which the methods rely are just not present in the latter (see the toy sketch below).
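As a toy illustration of that last point (again mine, with invented series and numbers, not an example from Wilson or anyone else quoted here): take one cheap method, linear extrapolation, and apply it both to a system that has the smoothness regularity the method silently relies on and to one that does not.

```python
import numpy as np

def linear_forecast(history, steps):
    """Fit a line to the past and extrapolate: a cheap method that quietly
    relies on the regularity that the series keeps varying smoothly."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)
    future_t = np.arange(len(history), len(history) + steps)
    return slope * future_t + intercept

# A smooth trend: the regularity is present, so the method works.
t = np.arange(35)
smooth = 2.0 * t + 0.1 * np.sin(t)

# A chaotic series (logistic map, r = 4): the same regularity is absent.
chaotic = np.empty(35)
chaotic[0] = 0.2
for i in range(1, 35):
    chaotic[i] = 4.0 * chaotic[i - 1] * (1.0 - chaotic[i - 1])

for name, series in (("smooth", smooth), ("chaotic", chaotic)):
    prediction = linear_forecast(series[:30], steps=5)
    error = np.mean(np.abs(prediction - series[30:]))
    print(f"{name}: mean absolute forecast error = {error:.3f}")
```

Nothing about the method changes between the two runs; only the presence or absence of the regularity it depends on does.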
To be clear, the realization of the importance of regularities is only the beginning. There are still so many open questions that I have, and I am far from a full theory of regularities, one which can take any method and explain why it works (or not) by figuring out the underlying regularities of the situation (and the ones required by the method).
Yet I still think that the concept of regularities is key to shaping my paradigm of Methodology: at this point, I expect that a mechanistic model of why methods work or don’t will use regularities as one of its core concepts. And a big part of what I end up doing in my reading and in this blog is to find more case studies and explore more instances of such regularities, in the hope of building this mechanistic model myself.
Refactoring Citrus Advice
Let’s imagine that we’re in the future, where we have a mechanistic model of why methods work or don’t, based on the concept of regularities.
What could we do with it?
I believe this is where the practical usefulness of Methodology would emerge. Because the useful thing we want out of Methodology is a theory of methods that would let us, for any new situation, generate a new adequate method based on our understanding of how methods work in general.
And the biggest blocker to building such a practical theory of methods is our confusion about why our methods work, or don’t. Since we lack a mechanistic model for it, our “explanations” are often ad hoc and wrong, focusing on the wrong aspects of the method.
Indeed, the rewards of adventitious inferential exploration constitute one of this book’s central themes—swift conceptual advance is often achieved through the transfer and adaptation of a familiar computational routine from one setting to another. At the same time, such advances frequently create a fair degree of conceptual confusion, for the true strategic underpinnings of a newly adapted reasoning scheme are often elusive.
(Mark Wilson, Physics Avoidance, 2017, p.88)
Writing about software engineering, Jimmy Koppel provides a nice name for such methods-without-understanding: “citrus advice”, after the initially proposed cure for scurvy.
Scurvy is an ancient disease caused by Vitamin C deficiency. After thousands of years of deaths, finally in the 1700s, a British scientist named James Lind proved in an experiment the cure: eat citrus. And so by the end of the century, the Royal Navy had started requiring all of their sailors to drink lemon juice. Scurvy rates went from crippling to almost none.
But then in the 19th century, they lost the cure. They didn't understand it well enough. They thought, "Lemons? Why not limes?" Big mistake. As a result, there were multiple voyages to the poles as late as 1910 in which everyone got scurvy and people died.
Now imagine that you are an 18th century sailor and you ask the powers that be, "How can I prevent scurvy?" The answer comes, "Eat citrus…..but wait! Lemons, not limes. And wait — do not boil it. And wait — do not store the juice in copper. If you store it in copper, it's worthless." This is kind of starting to sound like witch doctory.
From this, I coin the term citrus advice. The advice to eat citrus is good advice. It saved people’s lives. But it was not precise enough to capture the reality, and so it came with this long list of caveats that made it hard to use.
The precise advice is to get vitamin C. And while it took about 200 years to get from citrus to vitamin C, this is the ideal to be aiming for: the simple thing that explains all of the caveats.
(Jimmy Koppel, My Strange Loop talk: You are a Program Synthesizer, 2018)
My hope is that a mechanistic model of regularities will enable us to go from citrus advice to minimal and justified methods. And then, these very methods can be much more easily compared and grouped together, yielding much less noisy patterns to build a practical theory of methods from.
After that the sky is the limit: a practical theory of methods would enable us to tackle many more situations, and extend our ability to solve problems to the hard and open fields we mentioned above.
Conclusion
Having reached the end of this post, I see that I have given three different apologies for Methodology.
The first was purely about appreciation and curiosity, the promises of asking why methods work or don’t, and discovering the bewildering and enchanting range of methodological innovations and difficulties brought up by humans.
The second was about understanding, through regularities, why these methods work or don’t. It’s the excitement of finding an all-encompassing theory that captures so many interesting phenomena in a handful of key concepts articulated well.
The last was about usefulness: a mechanistic model of regularities would enable the refactoring of current methods from citrus advice to minimal and justified methods. Which then would enable the development of a practical theory of methods, eventually creating a powerful tool for solving all the problems we’re breaking our teeth on today.
- ^
Robert Caro’s amazing biographies probably have a hand in that change.
- ^
Probably the most recent thinking I have published on this topic is this post on the value of studying hard or “failed” fields for Methodology.