Published on March 28, 2025 4:50 AM GMT
Even an AGI "aligned" to a purpose that does not imply humanity's survival, but does require the AGI itself to achieve difficult feats, such as transforming the entire Solar System into a system computing as many digits of pi as possible, would obviously still need to produce the computing systems and gather the energy necessary for their work. As I mentioned in my previous question, all the electrical energy generated in the world cannot sustain more than a limited number of agents who each interact with GPT-3 a hundred times a day at 3 Wh per interaction. The OpenAI o3 model apparently requires more than 1 kWh per task.
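As a rough sketch of that bound, assuming 3 Wh per interaction, 100 interactions per agent per day, and roughly 30,000 TWh of electricity generated worldwide per year (the figure used later in this post):

```python
# Rough bound: how many GPT-3-scale agents could world electricity sustain?
# Assumed figures: 3 Wh per interaction, 100 interactions/day per agent,
# ~30,000 TWh of electricity generated per year.
WH_PER_INTERACTION = 3
INTERACTIONS_PER_DAY = 100
WORLD_TWH_PER_YEAR = 30_000

wh_per_agent_per_year = WH_PER_INTERACTION * INTERACTIONS_PER_DAY * 365
world_wh_per_year = WORLD_TWH_PER_YEAR * 1e12  # 1 TWh = 10^12 Wh

max_agents = world_wh_per_year / wh_per_agent_per_year
print(f"{max_agents:.2e}")  # → 2.74e+11, i.e. on the order of 10^11 agents
```

This is an upper bound in the most generous sense: it assumes every watt-hour generated on Earth goes to inference.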
However, the ARC-AGI task set shows the following trend: as models trained under the same paradigm raised the success rate on ARC-AGI-1 tasks from 10% to 75%, the cost increased about 500 times. The most expensive known model that scores 10% on ARC-AGI-1 tasks is GPT-4.5, suggesting that a paradigm shift lowers the cost at most about seven times. The fact that the o1-high model somehow solved 3% of ARC-AGI-2 tasks while the o1-pro model solved only 1%, and the o3-low model solved 4%, indicates that yet another paradigm shift plus massive scaling will be necessary to reach ARC-AGI-2-level performance. Is it likely that the cost per human-level task is actually another hundred times bigger than the one demonstrated by o3?
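Combining these two figures gives a crude sense of the net multiplier; this is only a sketch using the numbers above, not a measured cost:

```python
# Back-of-envelope for the per-task cost multiplier, using the post's figures:
# scaling within one paradigm raised cost ~500x (10% -> 75% on ARC-AGI-1),
# while a paradigm shift saves at most ~7x.
SCALING_COST_FACTOR = 500
PARADIGM_SHIFT_SAVING = 7

net_multiplier = SCALING_COST_FACTOR / PARADIGM_SHIFT_SAVING
print(round(net_multiplier))  # → 71, i.e. roughly the "hundred times" order
```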
Next we turn to the world's energy production. It is about 30 thousand TWh per year, meaning that even an AGI-run civilisation with the current-state energy industry is unlikely to solve more than 30 trillion OpenAI-o3-level tasks per year or, presumably, more than 300 billion ARC-AGI-2-level tasks per year (which is less than 1 billion such tasks per day). On the other hand, the world energy industry in 2022 employed about 67 million humans, of whom 32 million worked in the fossil-fuel sector, which generates around 80% of the energy. The billion ARC-AGI-2-level tasks solvable by the AGI per day is just 1.5 orders of magnitude away from the aforementioned 32 million humans.

The statements above seem to show that the AGI will need further discoveries in neuromorphic computing, and not just high-level machine-learning techniques, to be able to take over the world and run it. In addition, neuromorphic computing will likely make it harder for the AGI to solve many tasks at once, providing hope that a civilisation maintained even by a misaligned AGI would resemble mankind, with its many individual minds, and would be very hard to construct.
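The throughput estimate above can be sketched explicitly; the 100x multiplier for ARC-AGI-2-level tasks is the hypothetical figure discussed earlier, not a measurement:

```python
import math

# World energy budget converted into AGI task throughput, on the post's
# assumptions: ~30,000 TWh/year, >1 kWh per o3-level task, and a
# hypothetical ~100x energy cost per ARC-AGI-2-level task.
WORLD_TWH_PER_YEAR = 30_000
KWH_PER_O3_TASK = 1
ARC2_COST_MULTIPLIER = 100          # hypothetical
FOSSIL_FUEL_WORKERS = 32e6          # fossil-fuel sector employment, 2022

world_kwh = WORLD_TWH_PER_YEAR * 1e9                    # 1 TWh = 10^9 kWh
o3_tasks_per_year = world_kwh / KWH_PER_O3_TASK         # 3e13 tasks/year
arc2_tasks_per_year = o3_tasks_per_year / ARC2_COST_MULTIPLIER
arc2_tasks_per_day = arc2_tasks_per_year / 365

ooms = math.log10(arc2_tasks_per_day / FOSSIL_FUEL_WORKERS)
print(f"{arc2_tasks_per_day:.1e} tasks/day, {ooms:.1f} OOM above the workforce")
# → 8.2e+08 tasks/day, 1.4 OOM above the workforce
```

So the daily ARC-AGI-2-level task budget exceeds the fossil-fuel workforce by a factor of about 25, i.e. roughly 1.5 orders of magnitude, as claimed above.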
UPD: If the statements above are true, then aligning the AGI might be easier than we think, since the takeover itself would be more like the actions that mankind already condemns as colonialism.