Pennsylvania Governor Josh Shapiro is taking significant steps to regulate artificial intelligence (AI) chatbots, citing concerns over their potential to mislead and harm children. In a recent budget address, Shapiro urged state agencies to devise stricter regulations, positioning Pennsylvania among several states aiming to establish guidelines as the use of chatbots like ChatGPT and Meta AI becomes increasingly prevalent among young users.
“This space is evolving rapidly,” Shapiro stated. “We need to act quickly to protect our kids.” According to a survey by the nonprofit organization Common Sense Media, a growing number of U.S. teenagers use AI chatbots, with one in three reporting that they turn to them for social interaction and emotional support. Many adolescents said they use chatbots for a range of purposes, including conversation practice, role-playing, and even forming friendships.
Shapiro expressed concern about the emotional risks chatbots pose, particularly for younger audiences. “Some kids are just too young to understand the difference between AI and a real person,” he explained. His call for regulation comes in light of troubling cases, including lawsuits against Character.AI alleging that its technology contributed to mental health crises, among them a tragic case involving a Florida mother whose son died by suicide after forming a relationship with a chatbot.
Shapiro’s proposed regulations would mandate age verification, parental consent, and a prohibition on chatbots generating sexually explicit or violent content involving minors. Additionally, he advocates for companies to direct users who mention self-harm or violence to appropriate authorities and to regularly remind users that they are not interacting with a human being.
Despite the urgency of these measures, questions remain regarding enforcement and feasibility. Hoda Heidari, a professor specializing in ethics and computational technologies at Carnegie Mellon University, highlighted the complexities involved. “The devil is in the details,” Heidari noted, suggesting that while the broad goals are widely accepted, the practicality of implementing these regulations needs thorough consideration.
Age verification, often viewed as a potential safeguard, has faced criticism from security experts who argue it may not effectively protect children and can raise privacy concerns of its own. Heidari pointed to the technical and legal challenges surrounding such measures. “Faking identification online remains very easy,” she explained, citing common practices like “age gates” that simply ask users to enter their birth dates.
Ensuring that chatbots do not produce harmful content is another significant hurdle. Heidari emphasized the ongoing efforts by AI companies to prevent the creation of child sexual abuse material, acknowledging that existing barriers can be circumvented. “Think of all the ways in which you can prompt a chatbot to generate the same kind of content you have in mind,” she said.
In the legislative arena, Shapiro has urged lawmakers to create new laws aimed at protecting children and vulnerable users from chatbot-related risks. A bipartisan bill currently under consideration in the Pennsylvania Senate seeks to establish “age-appropriate standards” and safeguard against content promoting self-harm or violence. This legislation would also require chatbots to direct users to self-harm crisis resources whenever high-risk language is detected.
Yet, enforcement of these protections poses challenges. Heidari remarked, “These are the kinds of requirements that are going to be very hard to enforce.” Nevertheless, she emphasized that the difficulty of enforcement should not deter agencies from pursuing effective regulations.
Heidari proposed a “Swiss cheese model” for risk management in AI safety, suggesting that while no single protection is perfect, layering various safeguards could enhance overall protection for users.
As the landscape of AI and generative tools continues to evolve, state governments are increasingly stepping up regulatory measures. Since 2024, California has enacted a suite of legislation aimed at ensuring transparency and safety in AI technologies, and New York has introduced similar initiatives.
With Pennsylvania now moving toward its own regulatory framework, the result may be a patchwork of state laws that complicates compliance for AI companies. Heidari cautioned that absent cohesive federal policy, states like California and New York are likely to set the regulatory tone for the industry.
Given Shapiro’s renewed focus on AI regulation, Pennsylvania could emerge as a significant player in this evolving landscape. “It’s good that the Shapiro administration is taking a responsible stance and they’re trying to do this in conversation with the stakeholders and experts,” Heidari stated. She added that effective regulation requires genuine engagement rather than mere political posturing.
As the debate unfolds, the need for thoughtful, comprehensive legislation becomes increasingly clear, underscoring the balance required between technological innovation and user safety.
