AI Insights on Mental Health: Chuqing Zhao and Yisong Chen’s Research at AIAI 2025


By: Deshi Zhang

In an age when millions share their struggles, triumphs, and silent battles online, a new study shows how artificial intelligence might help society listen better. Presented at the IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI 2025), researchers Chuqing Zhao (Harvard University) and Yisong Chen (Georgia Institute of Technology) unveiled a language-model-powered system that decodes patterns of mental-health expression on social media.

Their work, titled “LLM-powered Topic Modeling for Discovering Public Mental Health Trends in Social Media,” uses advanced generative AI models—including GPT-3.5 and Mistral 7B—to analyze over one million Reddit posts discussing conditions such as depression, ADHD, eating disorders, and PTSD. The system doesn’t diagnose; it detects the subtle linguistic shifts that reveal how communities talk about mental distress, recovery, and hope.

“Every online post carries fragments of emotion—what people fear, how they heal, how they reach out,” said Chen. “Our goal was to teach AI to recognize those patterns without crossing into people’s privacy.”

How the Model Works

The framework merges Large Language Models with contextual topic modeling and zero-shot classification, creating a multi-stage pipeline that identifies, expands, and visualizes key discussion themes. The models associate posts with mental-health topics, surface hidden subthemes (such as “loneliness” or “relapse recovery”), and then project them onto a two-dimensional interactive map powered by MentalBERT embeddings.

The result: a human-readable “mental-health landscape” that allows researchers and clinicians to explore large-scale narratives without manually combing through millions of posts.
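The staged pipeline described above can be sketched in miniature. The snippet below is a simplified illustration, not the authors' implementation: the real system scores topics with an LLM and embeds posts with MentalBERT, while here a keyword-overlap stub stands in for both so the structure (classify, then expand subthemes) is visible end to end. All topic labels, keyword sets, and function names are hypothetical.

```python
# Hypothetical sketch of a multi-stage topic pipeline.
# A production system would call an LLM (e.g., GPT-3.5) for
# zero-shot labeling; a keyword stub stands in for it here.

TOPICS = {
    "depression": {"hopeless", "empty", "depressed"},
    "adhd": {"focus", "distracted", "hyperactive"},
    "ptsd": {"flashback", "trauma", "nightmare"},
}

def zero_shot_classify(post: str) -> str:
    """Stage 1: assign each post the best-matching topic label.
    In the paper's pipeline an LLM judges label fit; this stub
    counts keyword overlap instead."""
    words = set(post.lower().split())
    scores = {t: len(words & kw) for t, kw in TOPICS.items()}
    return max(scores, key=scores.get)

def expand_subthemes(posts_by_topic: dict) -> dict:
    """Stage 2: surface candidate subthemes within each topic
    cluster (here: the most frequent non-keyword words)."""
    subthemes = {}
    for topic, posts in posts_by_topic.items():
        counts = {}
        for post in posts:
            for w in post.lower().split():
                if w not in TOPICS[topic]:
                    counts[w] = counts.get(w, 0) + 1
        subthemes[topic] = sorted(counts, key=counts.get, reverse=True)[:3]
    return subthemes

posts = [
    "I feel hopeless and empty most days",
    "cannot focus so distracted at work",
    "another flashback last night the trauma lingers",
]
by_topic = {}
for p in posts:
    by_topic.setdefault(zero_shot_classify(p), []).append(p)
print(sorted(by_topic))
```

A real deployment would replace the stub with an LLM call and feed the resulting clusters into an embedding projection (the paper's 2D map); the staging, however, is the point of the sketch.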

AI with Empathy and Ethics

Unlike traditional text-mining methods such as LDA, which rely on rigid word counts, Zhao and Chen’s system leverages in-context reasoning and chain-of-thought prompting—techniques that guide the model step-by-step, much like a human analyst reasoning aloud.
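To make "reasoning aloud" concrete, a chain-of-thought prompt asks the model to enumerate its evidence before committing to a label. The template below is a hypothetical illustration of the technique; the paper's actual prompt wording is not reproduced here, and the function name and label list are assumptions.

```python
# Hypothetical chain-of-thought prompt template: the model is
# guided through explicit steps before giving a final topic label.

def build_cot_prompt(post: str, labels: list) -> str:
    return (
        "You are analyzing an anonymized social-media post.\n"
        f"Candidate topics: {', '.join(labels)}\n\n"
        "Reason step by step:\n"
        "1. List the emotional cues present in the post.\n"
        "2. Note which candidate topics those cues support.\n"
        "3. Pick the single best topic and explain why.\n\n"
        f"Post: {post}\n"
        "Answer with your reasoning steps, then 'Topic: <label>'."
    )

prompt = build_cot_prompt(
    "Lately I can't sleep and nothing feels worth doing",
    ["depression", "ADHD", "PTSD"],
)
print(prompt)
```

Because the intermediate steps are part of the model's output, an analyst can audit *why* a post was labeled a certain way, which is the transparency property the authors emphasize.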

Their experiments revealed that GPT-3.5 achieved the highest topic-alignment accuracy (0.75), more than tripling the scores of classical probabilistic baselines. Yet accuracy wasn’t the only goal.

“We wanted transparency,” Chen explained. “It’s not enough for AI to be right—it has to be understandable to people who use its insights.”

A Tool for Understanding Society, Not Individuals

Ethics was central to the research. Posts were anonymized and filtered for readability before analysis. The authors stress that the system is designed for population-level insight, not individual prediction—a distinction increasingly important as AI intersects with mental-health policy.

By visualizing discussions rather than diagnosing users, the team hopes to provide public-health organizations with real-time, privacy-respecting indicators of collective stress or community support needs. Their zero-shot topic model can identify both expected and emerging conversations, enabling adaptive response strategies to crises or sudden societal shifts.

The Broader Vision

Chen’s earlier research focused on AI for financial and healthcare fraud detection—systems that safeguard resources. This project reflects another dimension of his work: using AI to safeguard people.

“Whether we’re protecting data integrity or emotional well-being, the principle is the same,” he said. “Technology should amplify our ability to care, not replace it.”

With growing public concern about mental-health access and misinformation online, their framework represents a new direction for socially responsible AI: data science that listens first, acts second, and explains always.

About the Researchers

Yisong Chen is a Senior Decision Scientist at CVS Health and researcher at Georgia Tech, specializing in AI ethics, causal modeling, and human-centered analytics.
Chuqing Zhao is a Senior Data Scientist at Walmart Inc. and a researcher at Harvard University, focused on language models and computational social science.

Zhao, C., & Chen, Y. (2025). LLM-powered topic modeling for discovering public mental health trends in social media. In Artificial Intelligence Applications and Innovations. AIAI 2025 IFIP WG 12.5 International Workshops (Vol. 754, pp. 119–132). Springer. https://link.springer.com/chapter/10.1007/978-3-031-97313-0_10

Disclaimer: The information presented in this article is for informational purposes only and is not intended as medical advice or a substitute for professional mental health care. The AI system discussed does not diagnose individuals but rather analyzes population-level trends. Always consult with a qualified healthcare provider for any mental health concerns or treatment options. The views expressed are those of the authors and do not necessarily reflect the views of any associated organizations.

This article features branded content from a third party. Opinions in this article do not reflect the opinions and beliefs of San Francisco Post.