I'm still not sure how I feel about the neuronal basis (there seems to be no real reason it should be privileged?), but regardless it feels like a lot of 'neurons' or 'directions' or something must be taken up by complicated categorical and conditional information: "Peanuts are legumes." "Georgia is both a US state and a country." My bet is that this is a very small fraction of LLM parameters, but has anyone tried estimating it yet? We should, if not.
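One cheap way to start estimating, as a sketch rather than a real measurement: assume (following work like Geva et al. 2021 and the ROME line of model-editing papers) that much factual recall lives in the MLP blocks, magnitude-ablate growing fractions of those weights, and watch when canned factual completions collapse. The `gpt2` checkpoint, the two prompts, and the pruning heuristic below are all illustrative assumptions, not something the idea prescribes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

# Toy "standard knowledge" probes; a real estimate would need thousands.
FACTS = [
    ("Peanuts are classified as", " legumes"),
    ("Atlanta is the capital of the US state of", " Georgia"),
]

@torch.no_grad()
def fact_logprob(model, prompt: str, answer: str) -> float:
    """Total log-prob the model assigns to `answer` given `prompt`."""
    ids = tok(prompt + answer, return_tensors="pt").input_ids
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    logprobs = model(ids).logits.log_softmax(-1)
    # Logits at position t-1 predict the token at position t.
    return sum(
        logprobs[0, t - 1, ids[0, t]].item()
        for t in range(n_prompt, ids.shape[1])
    )

@torch.no_grad()
def ablate_mlp(model, frac: float) -> None:
    """Zero the `frac` smallest-magnitude entries of every MLP weight matrix."""
    for block in model.transformer.h:
        for mod in (block.mlp.c_fc, block.mlp.c_proj):
            w = mod.weight
            k = int(frac * w.numel())
            if k == 0:
                continue
            thresh = w.abs().flatten().kthvalue(k).values
            w.masked_fill_(w.abs() <= thresh, 0.0)

for frac in (0.0, 0.25, 0.5):
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()  # fresh weights
    ablate_mlp(model, frac)
    mean_lp = sum(fact_logprob(model, p, a) for p, a in FACTS) / len(FACTS)
    print(f"ablated {frac:.0%} of MLP weights -> mean fact log-prob {mean_lp:.2f}")
```

The more informative version would compare this curve against the same model's perplexity on generic text: if factual completions degrade much faster than general fluency, the gap gives a rough handle on how much parameter volume the facts alone occupy.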
If this idea inspires you, you can reach out to the author about collaborating, or cite it:
@misc{holtzman-how-much-volume-2026,
  author = {Holtzman, Ari},
  title  = {How much volume is standard knowledge in LLMs?},
  year   = {2026},
  url    = {https://hypogenic.ai/ideahub/idea/zpyheALLqDAI6srwbuaL}
}