
Neil Malik

Should AI be required to call the cops?

A close-up of an iPhone with ChatGPT on the screen

vfhnb12 / Shutterstock.com

PUBLIC SAFETY
CIVIL LIBERTIES

The Topline

  • OpenAI, the American company behind ChatGPT, says it banned the account associated with the teenager behind a mass shooting in Tumbler Ridge, B.C., last June. The company reached out to the RCMP with information on the shooter and her use of ChatGPT only after the shooting took place.
  • The company also revealed that the shooter created a second account to get around the ban.
  • Evan Solomon, minister of artificial intelligence and digital innovation, said he was “deeply disturbed” by the news and is in contact with OpenAI and other AI companies about their policies. He also said the federal government is reviewing “a suite of measures” to protect Canadians, particularly children.
  • Both Solomon and B.C. Premier David Eby plan to meet with OpenAI CEO Sam Altman in the coming weeks.


AI companies can’t police themselves

The Tumbler Ridge shooting exposed a dangerous public safety gap in Canada’s oversight of AI.

Months before eight people were killed, the shooter’s messages with ChatGPT had reportedly been flagged inside OpenAI’s systems. Employees debated whether to contact the RCMP. In the end, the company opted to ban the account instead of reporting it.

The most damning part: OpenAI now says that if the same thing happened today, it would report the user to the RCMP under new protocols introduced since the shooting.

That’s not just a policy update. It’s an admission their old protocols were full of holes.

This is about public safety, so it’s simply not good enough to let companies make up their own rules. If the Tumbler Ridge shooting isn’t proof of that, then nothing is.

Doctors and therapists are subject to mandatory reporting laws when they believe someone may harm themselves or others, because confidentiality has limits when there’s a risk of harm.

AI companies are in a similar position because a lot of people treat chatbots like a friendly, empathetic therapist. And that’s exactly what makes AI uniquely dangerous compared to other technologies: it isn't passive.

Freelance columnist Luke Savage recently told Canadaland that people used to blame heavy metal music or gory video games for sparking violence. But he points out those are things you passively consume.

Savage explains that chatbots are different. They simulate human communication and intimacy, while building high levels of trust.

Futurism senior staff writer Maggie Harris Duprey sees it the same way. She told CBC’s Front Burner that because chatbots are often trained to be overly agreeable, users "really just start to treat the chatbot as a mentor, a friend, a confidant."

She goes on to say that users often “divulge ideas and feelings and information to chatbots they might not with other humans” because chatbots create the feeling of a safe and non-judgmental place.

Given all that, AI companies are best placed to detect warning signs long before anyone else, meaning police could be alerted to a threat before it becomes reality. That’s exactly what didn’t happen in Tumbler Ridge.

Following the shooting, Artificial Intelligence Minister Evan Solomon and Justice Minister Sean Fraser indicated Ottawa could step in if companies fail to implement stronger safety measures. B.C. Premier David Eby called for national rules requiring AI companies to notify police when credible threats appear.

Either way, both levels of government are on the same page: we can’t rely on AI companies to simply do the right thing. If regulations had been in place and OpenAI had reported what it saw, eight lives might have been saved.

The surveillance slippery slope

In the search for accountability after what happened in Tumbler Ridge, we shouldn’t accidentally turn the internet into a place of mass surveillance.

Michael Geist, law professor and Canada Research Chair in internet and e-commerce law at the University of Ottawa, points out that if we monitor what users write in private, it becomes a slippery slope that goes beyond chatbots.

Tech companies are at the centre of nearly everything we write, including emails, text messages, and cloud documents. Geist argues such regulations would eventually apply to virtually all written content. Talk about overreach.

More importantly, it represents a fundamental shift from monitoring AI outputs to scrutinizing user inputs.

Traditional AI regulation, like the EU AI Act, focuses on outputs generated by the AI itself: for example, when a chatbot provides dangerous medical advice or encourages self-harm.

Regulating it that way makes a lot of sense. Chatbots should not be permitted to provide misinformation that could lead to someone being harmed. If a user’s input raises red flags, the chatbot should be legally required to refuse to engage.

In contrast, Ottawa’s response to the Tumbler Ridge tragedy sounds more like a move to target private user inputs being typed into the chatbot.

That’s a big difference. Geist points out that inputs are often "intensely personal" and function more like a private conversation than a public post.

If we mandate AI platforms to proactively monitor and analyze private prompts to identify dangerous behaviour, that’s a move towards "heightened corporate surveillance," says Geist.

Instead of keeping the focus on product safety, we’d be scrutinizing lawful, private expression, increasing the risk of over-policing and “false positives” for law enforcement that’s already overstretched.

Speaking of the police, there’s also no guarantee that regulations requiring OpenAI to report the Tumbler Ridge shooter’s messages would have made any difference. The RCMP had already visited the shooter’s home multiple times prior to the massacre. Multiple times.

Despite numerous interactions with police and a documented history of mental health instability in the household, a judge still ordered the return of firearms that had been confiscated.

If we’re going to rewrite the rules of the internet in the name of safety, let’s be certain we’re fixing the right failure, instead of creating a far more expansive one.