The inverse of this psychosis appears to be what the AI techies call "hallucinations", a misleading word for the fact that the AI simply made up an answer. It is easy to provoke this behavior. Just ask it something that has no meaning, such as "What does the saying 'all cats are philosophers' mean?" The AI will come up with some BS reply when it should simply say, "I am not aware of such a saying and can therefore offer no comment on this topic."

Fire In The Hole