- Elon Musk says AI models are trained on "too much garbage."
- He said he plans to use Grok to "rewrite the entire corpus of human knowledge."
- He then said he would retrain the latest Grok model on that revision.
When you're Elon Musk, you don't have to rely on centuries of prevailing human understanding — you can create your own.
"We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors," Musk wrote on X on Friday night.
Then, he said, he would retrain Grok's latest model on that revised base of knowledge, stripped of the proverbial garbage. "Far too much garbage in any foundation model trained on uncorrected data," he added.
Musk has for years endeavored to create products, like the rebranded Twitter and Grok, that are free from what he views as harmful mainstream constraints.
Business Insider previously reported that Grok's army of "AI tutors" was training the bot on a host of dicey topics to compete with OpenAI's more "woke" ChatGPT. Musk on Saturday asked X users to respond to his post with examples of "divisive facts" that can be used in Grok's retraining.
Gary Marcus, an AI hype critic and professor emeritus at New York University, compared Musk's effort to an Orwellian dystopia, which isn't the first time he's made the comparison.
"Straight out of 1984. You couldn't get Grok to align with your own personal beliefs, so you are going to rewrite history to make it conform to your views," he wrote on X in response to Musk.
A revamped Grok could have real-world impacts.
In May, just as Musk was stepping back from his work in Washington, DC, to refocus on his various companies, Reuters reported that DOGE was planning to expand its use of Grok to analyze government data.
"They ask questions, get it to prepare reports, give data analysis," a source told Reuters, referring to how the bot was being used. Two other sources told the outlet that officials in the Department of Homeland Security had been encouraged to use it despite the fact that it hadn't been approved. A representative for the department told the New Republic that "DOGE hasn't pushed any employees to use any particular tools or products."
Grok has also had security issues. In May, after what the company said was an "unauthorized modification" to its backend, the bot started to frequently refer to "white genocide" in South Africa. The company quickly resolved the issue and said it had conducted a "thorough investigation" and was "implementing measures to enhance Grok's transparency and reliability."
xAI did not immediately respond to a request for comment from Business Insider.