Circular reasoning

This HN comment made me chuckle because I could imagine it happening to me:

I had a similar experience trying to find a justification for a “House rule” we use when playing Catan (aka “the penny rule”). In trying to find a reputable source, the first result was to an HN comment… that I wrote. The folks we were playing with were not convinced.

The more serious point of the original post is that there’s a lot of circular reasoning going on with LLMs. Essentially, they spit out low-quality content or outright nonsense, and that pool of content goes on to become source data for the next round of LLMs, in a feedback loop of nonsense.

This is related to a point I’ve previously made in the closing of the canon.

I’ve started using Kagi search, and what I really like about their LLM, called FastGPT, is that it provides footnotes. With sources cited, LLMs become great research aides and librarians, which is where their true utility lies once the novelty wears off.