Layer-wise Probing Analysis of Belief Encoding in LLMs

by Chenxi Peng, 3 months ago

Inspired by Professor Tan’s inquiry, “Do LLMs differentiate epistemic belief from non-epistemic belief?”, and the psychological framework established by Vesga et al. (2025), we aim to conduct a layer-wise probing analysis of open-source Large Language Models (LLMs)—specifically OpenAI’s GPT-2 and Meta’s Llama series. Our objective is to identify which internal layers encode belief types and to determine whether these representations reflect deep semantic understanding or merely surface-level linguistic patterns. Future work may include applying these findings to hallucination detection.
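The core analysis described above can be sketched as follows: extract hidden states at every layer for a set of labeled sentences, then train an independent linear probe per layer and compare accuracies across depth. The sketch below uses synthetic stand-in features rather than real model activations (in practice they would come from a forward pass with `output_hidden_states=True`); the layer count, dimensionality, and binary epistemic/non-epistemic labels are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: 12 layers (as in GPT-2 small), 200 sentences,
# 64-dim features standing in for hidden states.
n_layers, n_sents, d = 12, 200, 64
labels = rng.integers(0, 2, n_sents)  # toy labels: 0 = epistemic, 1 = non-epistemic

# Synthetic per-layer representations. The class signal grows with depth
# purely to illustrate the shape of the analysis; real hidden states would
# be collected from the model instead.
states = []
for layer in range(n_layers):
    signal = (layer / n_layers) * np.outer(labels * 2 - 1, np.ones(d))
    states.append(rng.normal(size=(n_sents, d)) + signal)

# Train one linear probe per layer and record held-out accuracy.
accs = []
for layer, X in enumerate(states):
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accs.append(probe.score(X_te, y_te))
    print(f"layer {layer:2d}: probe accuracy = {accs[-1]:.2f}")
```

Layers whose probe accuracy rises well above chance are candidates for encoding the belief-type distinction; comparing this curve against probes trained on shuffled labels or surface features would help separate semantic encoding from linguistic-pattern shortcuts.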

If this idea inspires you, you can reach out to the author for collaboration or cite it:

@misc{peng-layerwise-probing-analysis-2026,
  author = {Peng, Chenxi},
  title = {Layer-wise Probing Analysis of Belief Encoding in LLMs},
  year = {2026},
  url = {https://hypogenic.ai/ideahub/idea/uRpccVOe4BmpJo6uDBPN}
}
