Do people believe everything they read, even from a respected blogger? Do we have the energy to read news from sources with more than one political slant? When people ask a trusted friend about something, are the friend’s statements reliable? How do they, and should they, determine what is true in what was said? Every Wikipedia article has a history page one can view that shows all the changes people have made. Even though it contains a valuable record of how the article came to its current state, often including intrigue and controversy, only serious editors tend to view it.
Authors ask people for help on ideas and manuscript drafts, but don’t accept all their suggestions. A user’s requests to AI are formed from, and often expose, their preconceptions and how they want to hear about things, and the AI, like a human listener, attempts to pay some homage to their perspective. GenAI draws from everything it has seen, responding with broad coverage of the topic in an organized presentation. Users get a lot of value from its (imperfect) responses. Just as we have others check our work and ideas, we need to check GenAI’s output, perhaps asking other AI tools such as Perplexity to supply references for its responses. Still, maybe we believe what it says more than we believe what people say. The paper “Durably reducing conspiracy beliefs through dialogues with AI” (https://www.science.org/doi/10.1126/science.adq1814) showed that while talking to people about their conspiracy beliefs didn’t change their minds, GenAI’s criticism was trusted more, reducing conspiracy beliefs by about 20%. We might talk of not trusting AI, but in some cases at least we trust its authority more than we trust people.
Like GenAI’s output, our ideas don’t usually come from nowhere; they are often things we casually heard that might not be right, or might be controversial. I imagine letting GenAI review text we are writing or reading to help us understand where those ideas stand in terms of factual basis and general acceptance. How timely our ideas are, and how well they fit with what others are saying, also matters.
Knowing that something is trending, or knowing the alternative ideas people are considering, might even be as important as factuality. While writing or reading, users might use a simple Web search to tell whether something is trending. The user interface could color such phrases purple in the email being drafted or received. AI could be asked whether a statement has a substantial body of writing for and against it. The user interface could color such statements shades of red to make us aware of how controversial they are: pink if a little controversial, red if very controversial. When a purple or red phrase is selected, the AI shows collected examples that justify calling the statement trending or controversial. A curious writer or reader can click the colored phrase like a link and learn more about what people are saying about it, as sketched below.
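To make the color coding concrete, here is a minimal sketch of how an interface might map rough signals about a phrase to the purple/pink/red highlights described above. The signal names, thresholds, and the `chooseHighlight` function are hypothetical illustrations, not taken from the prototype:

```typescript
// Hypothetical signals a backend might return for a phrase.
interface PhraseSignals {
  recentMentions: number;      // e.g., from a web or news search
  supportingSources: number;   // AI-collected writing in favor
  opposingSources: number;     // AI-collected writing against
}

type Highlight = "none" | "purple" | "pink" | "red";

// Map signals to the colors described above:
// purple = trending, pink = somewhat controversial, red = highly controversial.
function chooseHighlight(s: PhraseSignals): Highlight {
  const contested = Math.min(s.supportingSources, s.opposingSources);
  if (contested >= 20) return "red";            // plenty written on both sides
  if (contested >= 5) return "pink";            // some disagreement
  if (s.recentMentions >= 100) return "purple"; // trending but not contested
  return "none";
}

// Example: a statement with heavy coverage on both sides is flagged red.
console.log(chooseHighlight({ recentMentions: 500, supportingSources: 40, opposingSources: 35 }));
```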
So while people and GenAI can build believable fantasy worlds, they might also check facts and call bullshit.
Justin Gregg’s book “If Nietzsche Were a Narwhal” speaks to the difference between bullshit and lying. Lying is saying something you know to be false; bullshitting is not caring what the truth is and just saying what you want to believe. This blog is a call for using AI to help people notice where they might have made things up for convenience, when they would be better off knowing that not everyone agrees with them.
The idea of this blog is to imagine a tool that uses AI to encourage people to see other perspectives on what they write and read. Just as conspiracy beliefs were reduced when AI presented alternative perspectives, using AI to surface alternative perspectives on everything we write and read may help us reduce our siloed and biased thinking.
Here is an example from an actual prototype:
As shown, my Chrome extension has highlighted in red the text “With the Union’s victory, slavery was abolished nationally.” When the red text is selected, the “Highly Controversial Content” popup is displayed with “Supporting Points” and “Opposing Points.”
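As a rough illustration of this behavior (not the prototype’s actual code), a content script could replace a text node containing a flagged sentence with a red, clickable span and show a popup listing the AI-collected points. The `ControversyReport` shape and both functions are assumptions for the sketch:

```typescript
// Hypothetical shape of the AI-collected material for a flagged sentence.
interface ControversyReport {
  supportingPoints: string[];
  opposingPoints: string[];
}

// Replace a text node containing the flagged sentence with a red, clickable span.
function highlightSentence(node: Text, sentence: string, report: ControversyReport): void {
  const span = document.createElement("span");
  span.textContent = sentence;
  span.style.color = "red";
  span.style.cursor = "pointer";
  span.addEventListener("click", () => showPopup(span, report));
  node.replaceWith(span);
}

// Show a simple popup below the highlighted span with supporting and opposing points.
function showPopup(anchor: HTMLElement, report: ControversyReport): void {
  const popup = document.createElement("div");
  popup.style.cssText =
    "position:absolute;background:#fff;border:1px solid #ccc;padding:8px;max-width:320px;z-index:9999;";
  popup.innerHTML =
    "<strong>Highly Controversial Content</strong>" +
    `<p><em>Supporting Points</em></p><ul>${report.supportingPoints.map(p => `<li>${p}</li>`).join("")}</ul>` +
    `<p><em>Opposing Points</em></p><ul>${report.opposingPoints.map(p => `<li>${p}</li>`).join("")}</ul>`;
  const rect = anchor.getBoundingClientRect();
  popup.style.left = `${rect.left + window.scrollX}px`;
  popup.style.top = `${rect.bottom + window.scrollY}px`;
  document.body.appendChild(popup);
}
```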

Ted Selker is a computer scientist and student of interfaces.