MIT researchers find that long-term LLM interactions can trigger sycophancy: over time, models increasingly mirror users’ beliefs, reducing response accuracy.