TL;DR: Recently I wrote a post that got much less karma than I expected. My best guess is that the main reason is that I translated it from Russian to English via ChatGPT, and the easily recognizable LLM style convinced readers from the first lines that “there aren’t any original thoughts; it is a standard, machine-generated fluff piece.” Is that correct, or does the post itself contain major issues?
More detailed thoughts:
- I expected the ideas in the post to be quite non-trivial (the math is simple, but the practical takeaways are interesting; after all, Bayes’ and Aumann’s theorems are mathematically very simple, yet they yield many useful insights).
- When I read it in the mentioned book, I found it very interesting.
- When I published the original Russian post in my Telegram channel, it became one of the most liked posts there.
- It is not easy to see from the text, but I added many personal insights of my own on top of the book’s ideas (most of them about how exactly the results follow from the math, and why they still work in real life).
And now, more detailed questions:
- Are my assumptions correct? More specifically, I have the following questions (depending on how much effort you are willing to put in):
- If you see text that is obviously LLM-written/translated, what is the probability that you would stop reading after one or two paragraphs (except in cases where those paragraphs contain a real revelation)?
- How strong is the feeling, from the first one or two paragraphs of this particular post, that “there is nothing interesting in here; I will barely update; it’s just another LLM-written post”?
- If you read the whole post: how much of its information is interesting? Are there any significant flaws in its content? How unpleasant is it to read such LLM-style text?