I think the disagreement here is semantics around the meaning of the word “lie”. The word “lie” commonly carries an element of intent. An LLM can’t be said to have intent: it isn’t conscious and, therefore, cannot intend anything. The developers may have intent and may have adjusted the LLM to output false information on certain topics, but the LLM itself isn’t making any decision and has no intent.