Right now, chain of thought is pretending to be some kind of fuzzy reasoning process, which I don't believe it is. Humans, I believe, think in stories. What if we trained a model, from base through post-training, to write CoTs as stories, the way people recount their memories or talk through a situation and how they expect it to turn out? For instance, when you model a physics problem, would that do better? Would it do better for some tasks? Is a model even capable of producing CoTs like this, or does CoT take advantage of some special hidden pattern in current LLMs?
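A minimal sketch of what an A/B comparison might look like: the same physics problem paired with a standard step-by-step CoT prompt versus a narrative, story-style one. The function names and prompt wording here are my own assumptions, not anything prescribed by the idea.

```python
# Hypothetical prompt templates for comparing standard CoT against
# story-style CoT on the same problem. Wording is illustrative only.

def standard_cot(problem: str) -> str:
    """Elicit a conventional step-by-step chain of thought."""
    return (
        f"{problem}\n"
        "Let's think step by step, writing out each equation and "
        "substituting values before stating the final answer."
    )

def story_cot(problem: str) -> str:
    """Elicit a narrative CoT: recount the situation like a memory."""
    return (
        f"{problem}\n"
        "Tell the story of this situation the way you would recount a "
        "memory: what happens moment by moment, and how you expect it "
        "to turn out, before stating the final answer."
    )

problem = "A ball is dropped from a 20 m tower. How long until it lands?"
prompts = {"standard": standard_cot(problem), "story": story_cot(problem)}
```

One could then sample completions for both variants across a task suite and compare accuracy, which is the comparison the idea is asking about.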
If you are inspired by this idea, you can reach out to the authors for collaboration or cite it:
@misc{holtzman-story-cot-2025,
author = {Holtzman, Ari},
title = {Story CoT},
year = {2025},
url = {https://hypogenic.ai/ideahub/idea/mhakpZmiPP1PFynKutlW}
}