Published on February 5, 2025 11:18 PM GMT
Content warning: Explicit talk of recent murders and the views of those behind them.
Recently, there's been more violence involving the Zizians. Details are still coming out, but at the time of this writing, we know that:
- A solo Zizian stabbed and killed the same landlord from the last round of Ziz news.
- A group of Zizians was involved in a shootout at the Canadian-American border.
Between murders, casualties, and suicides, this brings the group's probable body count up to around eight deaths. At this stage, it's very pressing that more work be done to ensure the safety of those whom Ziz and company might otherwise attack, and I support the people who are doing that work.
But I think there's also important work to be done in preventing Ziz et al. from radicalizing new vulnerable people. I know of several people who find Ziz's work quite compelling, and I'm sure there are lots of people who would think that if exposed to it in the wrong context. My impression is that these kinds of people can probably be reasoned with, however, and extracted from the recruiting pipeline before it's too late.
So, with this post, I'm going to try to tackle that problem by explaining what I consider to be the major philosophical flaws in the Zizian worldview, and how it tends to get its hooks into people anyway. Hopefully this will feel a bit more like meeting the Ziz-curious where they're at.[1] I'm trying to avoid the potentially negatively polarizing effects of just branding all interest in and association with Zizian ideology as taboo, and instead provide the actual information that should cause one to be skeptical of Ziz's theories and tactics.
Incidentally, this project will involve explaining what that worldview actually is, since the details are quite opaquely presented even in places like Ziz's own blog. I'm in a somewhat advantaged position to explain this, having dug through and contemplated its contents a fair amount over the years. I've even talked to some (non-violent) Zizians about certain details of the theory at length. Per my intentions, this critique of Ziz's ideology will also be interwoven with a fairly clear introduction to what it actually is, which seems to be sorely missing from the corpus on this whole bloody affair.
Anyway, let's get into the details.
Core and Structure
Perhaps the most unique, foundational concept in Ziz's theory of the human psyche is her dichotomy between core and structure.
Basically, a core is the aspect of a mind that contains its fundamental values. Ziz suggests that it's probably not very mutable, if mutable at all. It's ultimately the driver of decisions inside your brain; if it doesn't perceive some action as useful per your values, it just won't dole out motivation for you to take that action. By contrast, structure is all the stuff about your mind that does change, most relevantly your habits and heuristics for actually implementing the basic values of your core.
Here's an explanation in Ziz's own words, for credibility's sake:
There is core and there is structure. Core is your unconscious values, that produce feelings about things that need no justification. Structure is habits, cherished self-fulfilling prophecies like my old commitment mechanism, self-image that guides behavior, and learned optimizing style.
Core is simple, but its will is unbreakable. Structure is a thing core generates and uses according to what seems likely to work.
Now, seeing as core is quite literally the more foundational of these two concepts, let's dig into the details of that one first. Ziz seems to think that core plays a role very similar to the one a utility function plays in certain AI architectures. A utility function is a mathematical function that takes in some facts about the world and spits out a number describing how good or bad that state of the world is. For certain possible designs for AI systems, the idea is basically to encode the AI's "values" in the form of a utility function, which it will then spend its entire existence striving to maximize. This idea resonates with what Ziz says about core in her post Choices Made Long Ago:
I don’t know how mutable core values are. My best guess is, hardly mutable at all or at least hardly mutable predictably.
Any choice you can be presented with, is a choice between some amounts of some things you might value, and some other amounts of things you might value. Amounts as in expected utility.
When you abstract choices this way, it becomes a good approximation to think of all of a person’s choices as being made once timelessly forever. And as out there waiting to be found.
Some people, myself included, resonate aesthetically with this model. It would be deeply satisfying if my motivations followed from basic, immutable values over the course of a lifetime. However, I doubt it maps very well onto how the brain actually works. A known problem with utility maximizing AIs is that running them in real life is wildly computationally intractable, because they make decisions partly by simulating huge numbers of possible futures in detail. Additionally, when you actually observe human behavior, you tend to get the feeling that values are vague and squishy, and even flexible over the course of a lifetime. It's hard to imagine how this behavior would emerge from a system with roughly fixed, consequentialist values.
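(To make the utility-function picture concrete, here's a toy sketch in Python. Everything here, from the value weights to the candidate futures, is my own illustrative assumption, not anything from Ziz's writing; a real utility maximizer would be evaluating astronomically many such futures, which is exactly the tractability problem.)

```python
# Toy sketch of a utility function: it maps a world-state to a single
# number, and an idealized agent picks whichever action leads to the
# highest-scoring expected future. All names and numbers here are
# made-up illustrative assumptions.

def utility(world_state: dict) -> float:
    # A hypothetical value system: the agent cares about two quantities.
    return 2.0 * world_state["people_helped"] - 1.0 * world_state["resources_spent"]

candidate_futures = [
    {"people_helped": 3, "resources_spent": 1},   # e.g. outcome of action A
    {"people_helped": 5, "resources_spent": 10},  # e.g. outcome of action B
]

# The agent "chooses" whichever future scores highest under its values.
best = max(candidate_futures, key=utility)
print(best)
```

The intractability worry is that real life offers not two candidate futures but an effectively unbounded space of them, each requiring detailed simulation to score.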
Ziz's model is further weakened by the existence of a well-evidenced alternative. Reinforcement learning[2] is the method whereby a mind accrues values by a lifelong process of reward and punishment. This theory was originally developed by early psychologists like Pavlov and Skinner, who naturally tailored it to widespread intuitions about how human behavior works in practice. Additionally, its computational feasibility has been vindicated by its usefulness for imbuing value-like behavioral patterns in modern neural networks, such as ChatGPT. Given these considerations, the probability I assign to humans having something like an immutable consequentialist "core" is quite low.
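(Here's an equally toy sketch of the reinforcement-learning alternative: behavioral tendencies aren't fixed up front, but drift toward whatever the environment rewards. The environment, action names, and learning rate are all made up for illustration, not drawn from any source discussed here.)

```python
import random

# Minimal sketch of value acquisition via reinforcement learning:
# the agent starts with no preference between actions, and its
# "values" (learned action tendencies) are shaped over time by
# reward and punishment. Entirely a toy model.

random.seed(0)
actions = ["share", "hoard"]
tendency = {a: 0.0 for a in actions}   # learned value estimates
alpha = 0.1                            # learning rate

def reward(action: str) -> float:
    # Hypothetical environment: sharing gets rewarded, hoarding punished.
    return 1.0 if action == "share" else -0.5

for _ in range(200):
    # Epsilon-greedy choice: mostly exploit current tendencies,
    # occasionally explore.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=tendency.get)
    # Nudge the chosen action's value toward the reward received.
    tendency[a] += alpha * (reward(a) - tendency[a])

print(tendency)  # "share" ends up valued far more highly than "hoard"
```

The point is just that nothing here resembles an immutable core: change the reward function and the same learning process produces very different "values".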
In fairness to Ziz, her theory does have a way of accounting for certain apparent failures to act rationally in accordance with fixed values. She thinks that rather than searching over all possible actions like a classical utility maximizer, the brain actually acts on its values by means of structure, or accumulated cognitive-behavioral strategies which can potentially be ill-suited to a given situation. (One possible reason she gives for this is that a person isn't acknowledging what their true values actually are, and is therefore failing to set up psychological structure that actually aligns with their goals.)
Here's her explanation of this idea from the post False Faces:
Attributing revealed-preference motives to people like this over everything they do does not mean you believe everything someone does is rational. Just that virtually all human behavior has a purpose, is based on at least some small algorithm that discriminates based on some inputs to sometimes output that behavior. An algorithm which may be horribly misfiring, but is executing some move that has been optimized to cause some outcome nonetheless.
So how can you be incorruptible? You can’t. But you [your core] already are. By your own standards. Simply by not wanting to be corrupted. And your standards are best standards! Unfortunately you are are not as smart as you [your structure], and are easily tricked. In order to not be tricked, you need to use your full deliberative brainpower. You and you need to fuse.
(Fusion is Ziz's term for when your values and your habits fall into perfect sync with one another. A lack of self-deception about the former lets you properly develop and execute on the latter. She claims to be very good at this herself.)
Personally, I think the idea that core can fail to generate high-quality structure feels like the wrong move in the argumentative dance. For one thing, it's just kind of vague about the mechanisms by which structure is generated and activated, especially in contrast to reinforcement learning, which has full-on "model organisms" in the form of neural networks. For another, it feels a bit like cognitive backtracking: a theory unnaturally contorting itself around observations which aren't predicted by the original idea that humans have core, immutable values. The reinforcement learning model by default predicts that people would be imperfectly rational, and have habits and heuristics which are adapted to some situations but which could misfire in practice. For Ziz's framework, though, this feels a bit more like a complication, and her theory of "poorly designed structure" is a somewhat inelegant solution.[3]
One last point I want to make on the concept of core as-such. If Ziz herself were politely responding to my argument against this aspect of her worldview, she'd probably tell her story about how she managed to become very much like a single-minded optimizer, by just ceasing to lie to herself about her values and building better structure for them to flow through.
Here's how she described her problem in the post Self-Blackmail:
If you’re not feeling motivated to do what your thesis advisor told you to do, it may be because you only understand that your advisor (and maybe grad school) is bad for you and not worth it when it is directly and immediately your problem. This is what happened to me. But I classified it as procrastination out of “akrasia” [laziness].
And here's how she described what happened after she stopped self-deceiving about what she cared about in My Journey to the Dark Side:
After a while, I noticed that CFAR’s internal coherence stuff was finally working fully on me. [CFAR was the Center for Applied Rationality, a major rationalist institution.] I didn’t have akrasia problems anymore. I didn’t have time-inconsistent preferences anymore. I wasn’t doing anything I could see was dumb anymore. My S2 ["system two", roughly the conscious mind] snapped to fully under my control.
The first of these claims is probably sometimes on-track for modeling human psychology: maybe you don't care about (e.g.) grad school because you secretly realize it's bad for you. If you don't see a valuable future for the version of you that's working on your current goal, it might be harder to keep dedicating yourself to it.
However, there are other plausible explanations, including ones the reinforcement learning model of human values suggests very naturally. Maybe you don't want to work at some long-term goal because you just haven't been reinforced in ways that correspond to getting it done. After all, the very nature of long-term goals is that you aren't experiencing them being fulfilled very often. A basic weakness of RL as a strategy is that it naturally gives rise to this kind of "high time-preference" behavior, so that's another point in favor of the RL model of human values as opposed to Ziz's notion of the core.
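(A quick illustration of why discounted reinforcement learning naturally produces "high time-preference" behavior: a reward T steps away is worth gamma^T as much as an immediate one, so even a much larger distant reward can lose to a small immediate one. The numbers here are arbitrary assumptions of mine.)

```python
# Sketch of temporal discounting in RL: with discount factor
# gamma < 1, the present value of a future reward shrinks
# exponentially with its delay.

gamma = 0.9

def discounted_value(reward: float, delay_steps: int) -> float:
    return reward * (gamma ** delay_steps)

# A small immediate reward vs. a much larger but distant one
# (e.g. finishing a degree many steps from now):
now = discounted_value(1.0, 0)      # 1.0
later = discounted_value(50.0, 60)  # 50 * 0.9**60, well under 0.1

print(now, later)  # the distant goal loses, despite being 50x larger
```

So on the RL story, struggling with long-term goals isn't evidence of a hidden verdict from your true values; it's the default behavior of the learning algorithm.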
As for Ziz's second claim, that after she discarded her illusions about her values she suddenly gained a huge degree of control over her cognitive and motivational architecture... well, I won't deny the therapeutic effects of letting go of beliefs you don't actually hold very deeply. However, Ziz seems to interpret what was going on inside her head as more or less Unleashing the Utility Maximizer Within; note the phrasing of claims like "I didn't have time-inconsistent preferences anymore," time-inconsistency being a concept that economists introduced to make expected utility theory better at modeling human behavior. Personally, I think it's more plausible to interpret these claims as having an aspect of self-aggrandizement and confirmation bias, rather than as evidence for her strained and inelegant theory of psychology.
Normally, I wouldn't find an occasional bit of self-aggrandizement too concerning. I think people should generally be allowed a little ego tripping as a treat, especially when it's flowing from legitimate self-improvement. However, this kind of self-aggrandizing manifests in several other regions of Ziz's ideology, to a degree where I think it lends her some dangerous potential to play the role of a kind of cult founder figure. This is particularly evident in the aspect of her worldview I'm going to talk about in the next section: her system for classifying people based on the goodness of their cores' innate values.
Spoilers: she's at the top of the pecking order of virtue.
Good and Evil
Once upon a time, a younger and more innocent Ziz opposed the whole framework of morality. She described it as a "DRM'd ontology", or a way of seeing the world that's designed to socially pressure people into forfeiting some of their freedom to act on their true values.[4] Here's how she puts it in the DRM'd Ontologies post itself:
The language of moral realism describes a single set of values. But everyone’s values are different. “Good” and “right” are a set of values that is outside any single person. The language has words for “selfish” and “selfless”, but nothing in between. This and the usage of “want” in “but then you’ll just do whatever you want!” shows an assumption in that ontology that no one actually cares about people in their original values prior to strategic compromise. The talk of “trying” to do the “right” thing, as opposed to just deciding whether to do it, indicates false faces.
Another artifact of her old anti-morality stance can be found in My Journey to the Dark Side, where she describes the mental motion she performed to free herself from the values others had imposed on her as follows:
Reject morality. Never do the right thing because it’s the right thing. Never even think that concept or ask that question unless it’s to model what others will think. And then, always in quotes. Always in quotes and treated as radioactive.
Make the source of sentiment inside you that made you learn to care about what was the right thing express itself some other way. But even the line between that sentiment and the rest of your values is a mind control virus inserted by a society of flesh-eating monsters to try and turn you against yourself and toward their will. Reject that concept. Drop every concept tainted by their influence.
Per the title of the post, this opposition to conventional morality is the thing she calls "the dark side". "The light side" is the world of fake morality, where people falsely claim to be altruistic just so others will be more willing to cooperate with them. The dark side is more honest, more tractable, and ultimately, more effective – which is why she chose to start identifying with it.
I'm not entirely opposed to this framework. It certainly raises a few red flags; for example, there's always the risk that someone who rejects conventional morality might do things most people would consider repugnant, and the rage and misanthropy of Ziz's wording doesn't do much to quell that anxiety. Ultimately, though, I can't be too mad at someone for giving what I see as basically an accurate analysis of the social function of the language of ethics. I've long had a similar axe to grind about how moral realism positions itself as both ontologically obligatory and grounded in Truth, when in reality it's just communities enforcing their chosen norms via dual-use language. So, I haven't gotten off the Zizian morality train just yet.
The next stage in the development of Ziz's moral philosophy also has both compelling and concerning aspects. This shift apparently took place from around 2017 to 2019, and saw Ziz deciding to frame her value system as Good anyway. Her claim was that the word "good" more or less implies her particular values (ostensibly something like radical utilitarianism, driven by a desire to save the unthinkable number of humans who might later exist if a technological singularity both occurs and goes well). And to the extent that standard usage doesn't agree with her values, she's willing to fight against culture to increase the degree of alignment.
From a comment Ziz left on DRM'd Ontologies, two years after the post itself went up:
What a turnabout that I’m calling my values “good” after saying “‘Good’ and ‘right’ are a set of values that is outside any single person.”
It turns out my values just happen to correspond as well as language can expect with that word. And i.e., if other people think carnism is okay, and roll that into the standard definition of “good”, then I won’t let them claim this word insofar as convincing me to describe myself as a “villain” like I used to. Because in a sense I care about, and which people I want to communicate with care about, that’s them executing deception and driving out our ability to communicate.
Our word. Hiss.
To me this basically reads as giving up on the truth-above-all pedantry of her previous position, in favor of linguistic convenience as well as engaging in memetic warfare in the name of her values. And there's something commendable about that. Unless you're going to build a conlang, it's going to remain cumbersome to always avoid ought statements or what appear to be claims about what's objectively good. And choosing to embody and fight for one's values is a sign of integrity and pragmatism, both of which are generally virtues.
However, when you consider the hateful and misanthropic tone of much of Ziz's writing, her readiness to militantly assert her own values as Good sets off some alarm bells. In practice, it seems like Ziz's mixture of self-righteousness and cynicism ended up sucking in a small crowd of people who wanted to be Good per Ziz's uniquely high standards, and propagated a worldview that promotes conflict between them (lone crusaders for justice) and those who didn't share their values. That's one story about how this aspect of Ziz's moral philosophy may have contributed to all the recent misguided violence, anyway.
This leads us into a third major component of Ziz's moral outlook, which is almost certainly the most cultish one: her doctrine of good and evil cores. The theory is an extension of the core/structure dichotomy we discussed earlier. The new idea here is that in practice, cores can come into existence with precisely two types of values, which Ziz calls good and evil.
For her, "good people are people with a significant amount of altruism in their cores." (Spectral Sight and Good). In the comments on that post, she further claims that if you're a good person, you'll value even just one other person above yourself, and generally tend to have good consequences for the world. Her glossary entry for good states that it's a "rare property of a core" (emphasis mine), indicating that she sees most humans as lacking these properties. Overall, Goodness means that a core "cares about good at all" due to its "choices made long ago," i.e. its initial value formation process.[5]
Moving on to the other kind of core Ziz discusses: Technically speaking, Ziz's glossary doesn't contrast good cores against evil ones, but rather nongood ones, "nongood" being "a blanket term covering neutral and evil when referring to a human." However, it turns out that "neutral" isn't actually a type of core; instead, her post Neutral and Evil claims that it's the behavioral signature of an evil core trying to come off as more altruistic than it actually is (e.g. because it doesn't want to forego the upsides of being trusted by others). So we can mostly ignore it for our purposes.
As for evil itself, the simple version is that Ziz operationalizes this in terms of cores driven solely by self-interest. However, according to this comment from late in her blog's run, she eventually decided that the real driving force of evil was "cancer and a willful embrace of death". Unfortunately Ziz never fully explains this characterization, instead linking to a blank page called "The Multiverse" whenever the topic comes up, and I haven't been able to piece together much from context either. In any case, the basic idea remains the same: the only alternative to a good, altruistic core is an evil one, which facilitates defection and death. It's a bleak, conflict-laden worldview, and if Ziz's update to the definition of evil is anything to go by, it was only getting bleaker over time.
Anyway, logically speaking, the entire project of taxonomizing people according to their basic core values is untenable if you accept my previous argument against the core/structure framework. I do think it's plausible that some people have different "reinforcement triggers" which cause them to tend towards learning relatively pro- or anti-social behavior; I'm guessing clinical psychopaths are an example of this, lacking basic emotional responses like shame which mainly serve to align humans with the culture around them. However, this model has very different implications than Ziz's theory of good and evil cores.
For one thing, unlike Ziz's theory, the RL model of human values doesn't posit any humans who are more or less ontologically good. The whole question of whether people are "fundamentally" out for themselves or not kind of ceases to make sense, because actions aren't generated by a process that optimizes for some explicit goal. It's not actually the case that some people unchangingly aim for what's right while other people unchangingly aim for what's wrong, so there's overall less need for conflict between the people Ziz would classify as "good" and "evil" than her framework insinuates.
Another, related difference with the RL model is that, compared to the good/evil core model, the former makes space for a whole spectrum of goodness and badness in humans. Here, goodness and badness are largely to be interpreted in terms of someone's actual actions, rather than some fundamental core, and people's reinforced patterns of behavior can obviously vary in how reliably they improve the world. There's also significantly more room for people to change on the RL model, which again means that in real life there's less necessary conflict between parties who initially seem to have differing objectives. Overall, this picture of the world is just far less of a bloody and bleak us-versus-them conflict between the rare, few good people and the endless hordes of evil.
(It's worth noting that although an RL agent's behavior is quite mutable, this is less true of the basic conditions under which they receive reinforcement. However, there's still a whole range of possible propensities for good and evil that might be baked into someone's "reinforcement schedule". I'm also skeptical that any such reinforcement schedule would reliably produce people who relentlessly optimize for the expected future well-being of all sentient creatures (a la Ziz's self-narrative). For just one difficulty with this view, recall how "akrasia" or "high time-preference behavior" can naturally emerge in an RL setting.)
Anyway, I've spent a lot of words speculating about the ways Ziz's angry, dualistic, chosen-few theory of morality could have helped foster her group's dangerous behavior, but I'd like to close out with some fairly strong empirical evidence on this point. In 2021, a blogger called Nis appeared, and she was clearly working closely with Ziz. Probably the most shocking post she wrote (although it's not that much of an outlier relative to the rest of her site's tone) was titled Killing Evil "People." Here's the first paragraph from that post.
If you truly irreconcilably disagree with someone's creative choice, i.e. their choice extending arbitrarily far into the past and future, ultimately your only recourse is to kill them. This is the ultimate line of defense against evil, upstream of looking for sus troll lines / bad things done for no good reason except apparent cancerous selfishness, because 'bad' must be evaluated according to your own creative judgement.
Ziz seemed to endorse this position, going so far as to leave a comment that reads: "And I'm so fucking glad to finally have an equal." This reads like the climax to Ziz's arc of increasing extremism and cultish tendencies, and I don't think it deserves an intellectual response. The point of these kinds of statements is clearly something like runaway ingroup signaling with self-righteousness and even elitism sprinkled in. If the faulty intellectual grounds for Ziz's worldview aren't discrediting, the social dynamics at play in this kind of interaction absolutely should be, and it seems like they probably only got worse leading up to the more recent flares of violence associated with Ziz's crowd. If you find this aesthetic attractive, I beg you to examine the psychological reasons that this may be the case, and not just the epistemological ones. It's probably not too late for you to change course.
Coda
So, this whole topic is pretty stressful, huh? I think I'd like to close out by putting everyone's mind at ease a little bit, with some thematically relevant yet decidedly non-morbid poetry. Here's Wild Geese, by the late Mary Oliver.
You do not have to be good.
You do not have to walk on your knees
for a hundred miles through the desert repenting.
You only have to let the soft animal of your body
love what it loves.
Tell me about despair, yours, and I will tell you mine.
Meanwhile the world goes on.
Meanwhile the sun and the clear pebbles of the rain
are moving across the landscapes,
over the prairies and the deep trees,
the mountains and the rivers.
Meanwhile the wild geese, high in the clean blue air,
are heading home again.
Whoever you are, no matter how lonely,
the world offers itself to your imagination,
calls to you like the wild geese, harsh and exciting—
over and over announcing your place
in the family of things.
I should have discussed hemisphere theory, functional decision theory, and the undead typology, but I ran out of steam and wanted to post this sooner rather than later. Sorry for the incompleteness.
- ^
To be clear, my impression is that Ziz herself and several of her close associates are too far gone, and should be treated as hostiles. I see negative expected value in trying to reason with them, which is why I'm publishing this anonymously.
- ^
While editing this piece, I discovered a certain complication. In what was probably a late addition to her glossary, Ziz briefly mentions "the link between cancer and being a reinforcement learner". It's clear from elsewhere that, by the time she was putting up her last posts, Ziz had come to see cancer as the essence of evil; therefore, the connection between cancer and RL means she's almost certainly considered and rejected RL as a universal story about how humans acquire values, and thinks truly Good people do it by some other means instead. This makes it even more likely that if Ziz saw this post, it would be falling on deaf ears.
- ^
If you're being maximally charitable, you can interpret "structure" as a metaphor for the cognitive algorithms developed by deep learning, and core as a metaphor for... the conditions under which your brain doles out reinforcement, maybe? But notice that this feels a bit like Christians re-classifying more and more of the Bible as metaphor as more and more of its empirical claims are falsified.
- ^
"DRM" stands for "digital rights management". It's a feature of certain computer programs that keeps you from using them in ways that would violate the software author's intellectual property rights (and thereby infringes on the user's freedoms). Ziz-circa-2016 sees the standard language we use for talking about morality as a kind of "DRM'd software", in the sense that it too restricts the freedom of its speakers to protect the "rights" of others.
- ^
Ziz goes back and forth on whether core values are A) biologically immutable, B) traceable exclusively to early formative experiences, or C) in principle alterable by the right experiences in adulthood. Regardless, her writing generally espouses cynicism about anyone's values ever changing in practice.