AI for science currently behaves more like a turbo-charger on existing agendas than a tool for asking whether we are solving the right problems. It accelerates what is already well funded, easily measurable, and convenient for CS labs, while messy, politically inconvenient, or low-profit problems that shape real lives remain in the dark.
Gestational diabetes mellitus (GDM) is a telling example. Routine screening in many health systems still occurs at 24–28 weeks of pregnancy, long after the underlying metabolic changes begin and when prevention options are already constrained. Meanwhile, women's health conditions are systematically underfunded relative to their burden; chronic disorders such as endometriosis can impose a per-patient economic cost comparable to, or greater than, that of diabetes, yet funding and progress lag far behind. These gaps do not exist because GDM or endometriosis is scientifically dull, but because both are entangled with pregnancy, menstruation, chronic pain, and gendered stigma: topics that attract less money, less political attention, and fewer glamorous AI benchmarks.
The same pattern appears beyond biomedicine. In education, social care, and public health, practitioners have documented research-practice gaps for decades: the hardest everyday problems rarely match the neat proxies used in academic models. AI researchers optimize benchmarks and publish; patients wait years, and anything unprofitable stays broken.
My question is therefore intentionally critical: can we use LLMs not just to automate what scientists already do, but to systematically expose what they ignore? I treat LLMs as gap detectors that read both sides of reality (papers, grants, and benchmarks on one side; clinical notes, incident reports, worker complaints, and patient forums on the other) and surface where pain is high but attention is low. We need an LLM-based agentic system that keeps science answerable to the world it claims to improve.
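To make the proposal concrete, here is a minimal sketch in Python of the final gap-ranking step. It assumes two pre-built signals already mapped to a shared topic vocabulary (e.g., MeSH terms): one counting research attention (papers, grants, benchmarks) and one scoring reported pain (clinical notes, patient forums). All names, numbers, and the scoring formula are illustrative assumptions, not a finished method; in a real system, LLMs would handle the hard upstream work of turning raw text into these topic-level signals.

```python
from dataclasses import dataclass

@dataclass
class TopicSignal:
    topic: str          # shared label, e.g. a MeSH term like "Diabetes, Gestational"
    attention: float    # papers + grants + benchmarks mentioning the topic (supply side)
    pain: float         # severity-weighted mentions in notes / patient forums (demand side)

def gap_scores(signals: list[TopicSignal]) -> list[tuple[str, float]]:
    """Rank topics by how far reported pain exceeds research attention.

    Both signals are min-max normalized so gaps are comparable across topics;
    gap > 0 means the topic looks underserved relative to its burden.
    """
    a_vals = [s.attention for s in signals]
    p_vals = [s.pain for s in signals]

    def norm(x: float, lo: float, hi: float) -> float:
        return (x - lo) / (hi - lo) if hi > lo else 0.0

    scored = [
        (s.topic,
         norm(s.pain, min(p_vals), max(p_vals))
         - norm(s.attention, min(a_vals), max(a_vals)))
        for s in signals
    ]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy numbers, for shape only -- not real burden or funding data.
signals = [
    TopicSignal("Diabetes, Gestational", attention=30.0, pain=80.0),
    TopicSignal("Endometriosis",         attention=15.0, pain=90.0),
    TopicSignal("LLM benchmark suites",  attention=95.0, pain=10.0),
]
for topic, gap in gap_scores(signals):
    print(f"{topic:24s} gap={gap:+.2f}")
```

The interesting work sits upstream of this arithmetic: getting an LLM to convert messy forum posts and clinical notes into honest pain estimates without laundering in the very biases the system is meant to expose.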
If you are inspired by this idea, you can reach out to the author for collaboration or cite it:
@misc{z-can-llms-expose-2025,
  author = {Z, Amber},
  title  = {Can LLMs Expose What Science Refuses to See?},
  year   = {2025},
  url    = {https://hypogenic.ai/ideahub/idea/68eUHtLQpxwQ1MuH6bbu}
}