LLMs and other generative models appear to be converging both in their representations (https://arxiv.org/abs/2405.07987, https://arxiv.org/abs/2504.08775) and in their personas (https://cichicago.substack.com/p/persona-collapse, https://arxiv.org/abs/2510.22954).
Does Chain of Thought cause more or less convergence?
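One concrete way to operationalize "convergence" for this question is to compare representation similarity across models on the same prompts, with and without Chain of Thought. A standard metric for this is linear CKA (centered kernel alignment); the sketch below assumes hypothetical activation matrices of shape (n_prompts, hidden_dim), with random placeholders standing in for actual model activations.

```python
import numpy as np


def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices (rows = prompts).

    Invariant to orthogonal transforms and isotropic scaling, so it can
    compare models with different hidden dimensions.
    """
    # Center each feature dimension
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_prompts = 100
    # Placeholder activations; in a real study these would be hidden
    # states from two models on the same prompts, under direct-answer
    # vs. Chain-of-Thought prompting.
    model_a_cot = rng.standard_normal((n_prompts, 64))
    model_b_cot = rng.standard_normal((n_prompts, 48))

    # One could then compare CKA(model_a_cot, model_b_cot) against the
    # same quantity computed from no-CoT activations.
    print(f"CKA (CoT placeholders): {linear_cka(model_a_cot, model_b_cot):.3f}")
```

A model compared with itself gives CKA = 1.0, and unrelated random representations give values near 0, so the interesting signal is whether cross-model CKA rises or falls when CoT is added.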
If this idea inspires you, you can reach out to the author to collaborate, or cite it:
@misc{holtzman-does-chain-of-2026,
author = {Holtzman, Ari},
title = {Does Chain of Thought cause models to converge more?},
year = {2026},
url = {https://hypogenic.ai/ideahub/idea/vbUl35tUGNzskaqwPcCq}
}