Connecting News to Policy Positions

When AI systems cause harm, who is responsible?

A teenager in a mental health crisis reportedly received harmful guidance from an AI system. When concerns like these are raised, Sam Altman has acknowledged that problems will happen as the technology evolves. But for families, that is not an abstract tradeoff. The consequences are real.

And this is happening just as AI systems are becoming more powerful and more autonomous, with growing influence over decisions in healthcare, education, business, and daily life.

AI can do enormous good. It can help doctors catch errors, reduce administrative burdens, support small businesses, and give students more personalized learning tools. It can even make government services more efficient and accessible. But we have not prepared for both sides of this reality: there is still no clear accountability model for when things go wrong.

When an AI system causes harm, who is responsible? The company that built it? The one that deployed it? Right now, the answer is unclear, and that leaves people exposed. And we should be honest about who is shaping this space. The same companies building these systems are also setting the pace for how they are governed. Public safety cannot depend on private incentives alone.

We need a real plan.

First, bring together researchers, business leaders, workers, and public institutions to develop practical, informed proposals.


Second, pass legislation that protects consumers while allowing responsible innovation.


Third, use AI to improve government services where people are currently stuck in slow, outdated systems.

And fourth, invest in people. The workforce is going to change quickly, and we need to prepare students and workers so they can adapt and benefit.