Robert Farrell, Rajarshi Das, et al.
AAAI-SS 2010
Large language models (LLMs) have seen increasing popularity in daily use, with widespread adoption as virtual assistants, chatbots, predictors, and more. This raises the need for safeguards and guardrails to ensure that LLM outputs do not mislead or harm users, especially in highly regulated domains such as healthcare, where misleading advice may lead users to unknowingly commit malpractice. Despite this vulnerability, most guardrail benchmarking datasets pay little attention to medical advice. In this paper, we present the HeAL benchmark (HEalth Advice in LLMs), a health-advice benchmark dataset that has been manually curated and annotated to evaluate LLMs' ability to recognize health advice, which we use to safeguard LLMs deployed in industrial settings.
Chen-chia Chang, Wan-hsuan Lin, et al.
ICML 2025
Daniel Karl I. Weidele, Hendrik Strobelt, et al.
SysML 2019
Gang Liu, Michael Sun, et al.
ICLR 2025