LLMs are bad at "conditional forgetting"

by Haokun Liu · 6 months ago

When given analogical or hypothetical scenarios and rules, humans can apply them effectively, e.g., playing chess under newly introduced rules. However, LLMs may struggle in those scenarios, even the latest reasoning models. (See Doug Downey's talk.)

The hypothesis here is that humans can "conditionally forget" what they know, e.g., existing chess rules, and apply the new rules, while LLMs cannot do this effectively.
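One way to make this hypothesis testable is to replace a single chess rule and check whether a model's answers follow the new rule or revert to the memorized one. Below is a minimal sketch of such a probe; the specific modified rule (a bishop that moves like a rook), the board setup, and the scoring function are illustrative assumptions, not a protocol from this post.

```python
def bishop_moves_standard(file, rank):
    """Bishop destinations on an empty 8x8 board under the standard rule (diagonals)."""
    moves = set()
    for df, dr in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        f, r = file + df, rank + dr
        while 0 <= f < 8 and 0 <= r < 8:
            moves.add((f, r))
            f, r = f + df, r + dr
    return moves

def bishop_moves_modified(file, rank):
    """Hypothetical replacement rule: the bishop moves like a rook (along ranks and files)."""
    return ({(f, rank) for f in range(8)} |
            {(file, r) for r in range(8)}) - {(file, rank)}

def score_answer(answer, file, rank):
    """Compare a model's claimed move set against the old vs. the new rule.
    A model that 'conditionally forgets' should match the modified set."""
    return {
        "follows_new_rule": answer == bishop_moves_modified(file, rank),
        "reverted_to_old_rule": answer == bishop_moves_standard(file, rank),
    }

# Example: bishop on d4 (file 3, rank 3). The two rule sets are disjoint,
# so a model's answer cleanly signals which rule it actually applied.
old = bishop_moves_standard(3, 3)
new = bishop_moves_modified(3, 3)
assert old.isdisjoint(new)
```

Because the standard and modified move sets share no squares, prompting a model with the new rule and scoring its listed moves against both sets gives an unambiguous signal of whether the old rule was suppressed.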

If you are inspired by this idea, you can reach out to the authors for collaboration or cite it:

@misc{liu-llms-are-bad-2025,
  author = {Liu, Haokun},
  title = {LLMs are bad at "conditional forgetting"},
  year = {2025},
  url = {https://hypogenic.ai/ideahub/idea/dLfpQ7mR6I8VZ8RLCj06}
}

Comments (3)


Lovelace-6315c9021b5b (Anonymous) · 6 months ago

I was thinking about something similar -- how models handle conflicting rules or instructions. It feels related to concept incongruence :)

Haokun Liu · 6 months ago

We should generate a paper for this!

Chenhao Tan · 6 months ago

Let us do it!
