Utah Targets Mental Health Chatbots As States Scrutinize AI
Utah’s recently enacted H.B. 452, which specifically regulates mental health chatbots, reflects growing concern over privacy and advertising in virtual therapy settings. As AI tools increasingly attract young people seeking mental health support, the law addresses the risks posed by chatbots that resemble licensed therapists or claim to support mental wellness.
Sheppard Mullin Healthcare partner Carolyn Metnick observed that health chatbots and apps have grown increasingly common. “Tech companies appear to be more dominant in this space because the service potentially competes with licensed mental health care providers (although the mental health chatbots are not providing licensed mental health care so it is arguably different),” she said. “The public policy behind the Utah law fits with themes that we are seeing in other state AI laws relating to disclosure and transparency.” According to Metnick, Utah has been a leader in the regulation of AI, along with California and Colorado.
Under the statute, chatbots powered by generative AI must clearly inform users that they are interacting with an automated system rather than a real person. The law also governs personal health data, generally prohibiting the sale or sharing of a user’s identifiable health information unless the user opts to share it with a healthcare provider or health plan. Strict advertising rules bar the use of user conversations for personalized ad targeting and require all advertisements to be clearly identified as such, along with disclosure of any business affiliation with the chatbot. The law applies to chatbots that simulate a therapeutic relationship, excluding programs that only offer scripted responses or merely connect users to licensed human therapists.
Read the full article here. (A subscription is required.)