Hippocratic AI Founder Munjal Shah Raises $53 Million at a $500 Million Valuation for Health Care LLMs

Photo Courtesy: Munjal Shah

By: Hippocratic AI

Munjal Shah stressed the company’s “safety-first journey” and plans to fund Phase 3 testing of its constellation of LLMs, which provide nondiagnostic medical services.

Hippocratic AI, a startup focused on integrating generative artificial intelligence into nondiagnostic health care applications, announced a $500 million valuation following the close of a $53 million Series A funding round, bringing the company’s total funding to $120 million. The round was led by Premji Invest and General Catalyst, with contributions from SV Angel, Memorial Hermann Health System, and existing investors such as Andreessen Horowitz (a16z) Bio + Health, Cincinnati Children’s, WellSpan Health, and Universal Health Services.

Hippocratic AI co-founder and CEO Munjal Shah says the company will use the funds to accelerate product development and conduct Phase 3 testing of its healthcare large language models.

“When we started the company, we prioritized safety as our top value. This is why we named the company after the physicians’ Hippocratic oath and made the tagline ‘Do No Harm,’” said Shah in a statement. “This has been our guiding principle since the company’s founding. Our focus on safety testing our product in multiple phases and transparent publication of the results for everyone to see is the next down payment in this safety-first journey. Our selection of partners who align with our values and have the patience to let us pursue safety over revenue and profits further underscores our commitment to these values.”

What Does Hippocratic AI Do?

Munjal Shah founded Hippocratic AI in 2023 to develop artificial intelligence agents that assist with low-risk, nondiagnostic, patient-facing tasks. The company is building a “constellation” of LLMs that use generative AI, the technology behind popular AI chatbots like ChatGPT, to provide services like patient navigation, chronic care nursing, and dietitian advice.

While a chatbot can’t replace a human nurse, the hope is that the company’s technology can help address the global shortage of healthcare professionals such as nurses, social workers, and nutritionists by reaching more patients than would be feasible for a human facing the stresses of everyday medical work and life. 

Hippocratic AI’s technology is based on its Polaris architecture, a multiagent LLM constellation optimized for real-time healthcare conversation. This system combines a primary generative AI conversational agent with numerous specialist support agents, each honed for specific medical tasks. The design allows for nuanced, patient-friendly dialogue that aligns closely with the professional conduct and tone of nurses, medical assistants, social workers, and nutritionists.
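The article describes Polaris only at this level of detail, but the general pattern it outlines, a primary conversational agent consulting task-specific support agents before responding, can be sketched roughly as follows. Every class, function, and rule in this sketch is a hypothetical illustration of that pattern, not Hippocratic AI’s actual code or API.

```python
# Hypothetical sketch of a "constellation" pattern: a primary conversational
# agent consults task-specific support agents before replying to a patient.
# All names and logic here are illustrative assumptions, not the company's code.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SupportFinding:
    agent_name: str
    note: str


def medication_support_agent(message: str) -> List[SupportFinding]:
    """Flags medication-related content for the primary agent (stubbed)."""
    findings: List[SupportFinding] = []
    if "ibuprofen" in message.lower():
        findings.append(SupportFinding(
            "medication",
            "Ibuprofen mentioned; check condition-specific OTC restrictions."))
    return findings


def lab_support_agent(message: str) -> List[SupportFinding]:
    """Flags lab-test questions so the reply can include preparation guidance (stubbed)."""
    findings: List[SupportFinding] = []
    if "blood test" in message.lower():
        findings.append(SupportFinding(
            "labs", "Patient asked about a lab test; include preparation guidance."))
    return findings


SUPPORT_AGENTS: Dict[str, Callable[[str], List[SupportFinding]]] = {
    "medication": medication_support_agent,
    "labs": lab_support_agent,
}


def primary_agent_reply(patient_message: str) -> str:
    """Gathers specialist findings, then composes a single patient-facing reply."""
    findings: List[SupportFinding] = []
    for agent in SUPPORT_AGENTS.values():
        findings.extend(agent(patient_message))

    # In a real system, the primary conversational model would condition on
    # these findings; here we simply surface them so the control flow is visible.
    notes = "; ".join(f.note for f in findings) or "No specialist notes."
    return f"Thanks for checking in. ({notes})"


if __name__ == "__main__":
    print(primary_agent_reply("Can I take ibuprofen before my blood test tomorrow?"))
```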

Training Empathetic AI

The training process for these AI agents is designed to prepare them for a wide array of patient interactions. Initial training phases involve evidence-based research and simulated conversations crafted with the input of U.S.-licensed nurses and patient actors. This approach facilitates the development of AI agents that both understand medical information and can engage in natural and supportive dialogue with patients.

Subsequent rounds of training incorporate AI-generated conversations, which healthcare professionals review and potentially revise. This iterative process allows for continuous improvement and adaptation of the agents’ conversational skills and medical knowledge.
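The article describes this review cycle only in outline. A minimal sketch of the generate, review, and retrain loop, with every data structure and stub function invented purely for illustration, might look like this:

```python
# Minimal sketch of the review-and-revise training loop described above.
# Generation, clinician review, and fine-tuning are stubbed; the structure
# (generate -> clinician review -> fold approved data back in) is the point.

from dataclasses import dataclass
from typing import List


@dataclass
class Conversation:
    transcript: str
    approved: bool = False
    clinician_notes: str = ""


def generate_candidate_conversations(n: int) -> List[Conversation]:
    """Stand-in for AI-generated patient conversations."""
    return [Conversation(transcript=f"simulated conversation #{i}") for i in range(n)]


def clinician_review(conv: Conversation) -> Conversation:
    """Stand-in for review by licensed nurses; here everything is approved with a note."""
    conv.approved = True
    conv.clinician_notes = "Tone and medical content acceptable."
    return conv


def fine_tune(training_set: List[Conversation]) -> None:
    """Stand-in for a fine-tuning run on the clinician-approved conversations."""
    print(f"Fine-tuning on {len(training_set)} approved conversations.")


def training_round(round_size: int = 3) -> None:
    candidates = generate_candidate_conversations(round_size)
    reviewed = [clinician_review(c) for c in candidates]
    approved = [c for c in reviewed if c.approved]
    fine_tune(approved)


if __name__ == "__main__":
    training_round()
```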

The AI is trained to perform tasks such as checking on patient wellness, reviewing medication adherence, and providing preoperative instructions within a supportive and empathetic conversation. 

In March 2024, Hippocratic AI announced a partnership with chipmaker Nvidia to reduce latency for real-time patient interactions.

“Nvidia’s technology stack is critical to achieving the conversational speed and fluidity necessary for patients to naturally build an emotional connection with Hippocratic’s Generative AI healthcare agents,” said Munjal Shah in a statement.

How Does an LLM Compare to a Human Health Care Worker?

Over 1,100 U.S.-licensed nurses and more than 130 U.S.-licensed physicians have participated in testing the LLMs across various dimensions, including medical safety, conversational appropriateness, and empathy. 

For Phase 3, the company’s testing criteria call for 5,000 licensed nurses and 500 licensed physicians, along with its health system partners, to complete the testing requirements.

“We’re going to sit here until it’s safe, as determined by clinicians,” Shah stated in a recent interview.

In addition to announcing its latest funding round, Hippocratic AI released data from its internal research comparing its LLMs with human medical professionals and other LLMs. The results suggest that its agents can outperform humans on several nondiagnostic tasks.

The company found that its AI delivered correct medical advice 96.79% of the time, notably higher than the 81.16% rate for human nurses alone. Incorrect advice from the AI resulted in no harm in 1.83% of cases, minor harm in 1.32% of cases, and severe harm in just 0.06%.

By comparison, human nurses gave incorrect advice that resulted in no harm in 14.72% of cases and minor harm in 4.12% of cases, with no instances of severe harm.

In addition, Hippocratic AI’s constellation significantly outperformed Meta’s Llama 2 70B Chat and even OpenAI’s GPT-4 in recognizing the impact of medications on lab tests, with a success rate of roughly 79%. This rate was higher than GPT-4’s 74%, Llama 2’s 0%, and human nurses’ 63%.

It also excelled at identifying condition-specific disallowed over-the-counter drugs, achieving an 88.73% success rate, a task where, again, both the other LLMs and humans lagged considerably behind, with success rates ranging from 30% to 45%. For detecting toxic OTC dosages, Hippocratic AI’s system scored 81.50%, compared to GPT-4’s 38.06%, Llama 2’s 9.11%, and humans’ 57.64%.

Hippocratic AI has also asked human nurses to compare its LLMs’ bedside manner to that of their human counterparts. Patients felt nearly equally comfortable confiding in the LLMs and human nurses, with scores of 88.93% and 88.81%, respectively. Still, the LLM was deemed more successful at getting to know patients individually, scoring 78.43% compared to human nurses at 57.58%. The LLMs also excelled at creating and seizing opportunities to educate patients about their conditions, a key goal for Hippocratic AI, achieving an 89.82% score and outperforming human nurses, who scored 80.64%.

However, despite these promising results, human nurses were still considered slightly more effective overall, with a score of 87.34%, topping the AI’s 85.66%. 

The Future of Healthcare AI

Hippocratic AI’s development indicates a cautious yet optimistic approach toward integrating AI into health care. The company continues to carry out a multiphase safety testing protocol, engaging with a broad network of healthcare systems and professionals to validate its AI agents.

At the same time, the company’s funding achievements in its first years of operations speak to the potential trajectory of the healthcare AI space. With a focus on safety, strategic partnerships, and a clear vision for the application of generative AI in healthcare, Munjal Shah’s Hippocratic AI could be poised to play a central role as AI continues to be one of the most discussed and well-financed areas of health care innovation.

 

Published By: Aize Perez


This article features branded content from a third party. Opinions in this article do not reflect the opinions and beliefs of San Francisco Post.