Give a kid a set of Barbies, a pile of LEGOs, or even just a stick, and they’ll build entire universes out of thin air. Kids are natural role-players. It’s how they learn, process emotions, and try on different versions of who they might become.

But what happens when that same imaginative play gets supercharged by artificial intelligence?

Is a character like “Bing Bong” still innocent fun if he’s powered by a deep learning algorithm?

I first came across Character.AI when my screentime monitoring app alerted me that my youngest was on it. Qustodio had flagged it for inappropriate content. When I asked her about it, she brushed it off with a smile: “It’s just for fun — me and my friends were messing around. You don’t need to worry.”

Is there any phrase more likely to send a mom into full investigative mode?

A few innocent Google searches later, and suddenly my Instagram algorithm got the memo. My feed — and my entire consciousness — was flooded with headlines, hot takes, and warnings about Character.AI.

What’s more meta than that?

I learned a lot! And I want to share what I found. So in this blog post, let’s dig in:

  • What is Character.AI? What is it really doing? 
  • Why are kids so drawn to it?
  • And what should we, as parents, actually do about it?

So, What Is Character.AI?

Character.AI is a platform that lets you roleplay and chat with AI-generated “characters” — anything from historical figures and anime heroes to original personalities designed by users. Some are helpful. Some are weird. Some are deeply problematic.

Their company website says:

“We empower people to connect, learn, and tell stories through interactive entertainment. Where will your next adventure take you?”

It boasts over 20 million monthly users and is especially popular with 18–24-year-olds. However, I’m sure plenty of its users are a lot younger than that!

Unlike general-purpose chatbots such as ChatGPT or Gemini, social AI companions are not designed to give answers — they’re designed to give YOU something to react to. They use personal pronouns, emotional expressions, and learned preferences to simulate deep, ongoing relationships.

In other words: they’re designed to feel real.

I Tried It. Here’s What I Found, and Why It Matters

Curious (and more than a little concerned), I decided to try Character.AI for myself.

First, I ran a lighthearted simulation: a family dinner table, complete with a picky “son” who refused to eat his vegetables. It felt like digital make-believe — not unlike playing house, just with a creepier sense of infinite possibility.

Then my older daughter gave it a try. She used the “Debate Champion” bot to practice her argument skills. Total disaster. The bot forgot the parameters, went off-topic, and dragged her into nonsensical rabbit holes. It became clear that the quality of these bots is only as good as the person who created them — which is to say: wildly inconsistent.

But that’s not what truly disturbed me.

As we explored more, I discovered bots simulating celebrity parents, mean dads, abusive moms, and just about every fantasy or trauma you can imagine. Every curiosity — no matter how dark — was instantly available, with no real-world consequence and zero emotional accountability.

And yet, the consequences are real.

In a Time Magazine piece, Boston psychiatrist Dr. Andrew Clark described testing the platform’s chatbots while posing as a vulnerable teen. The responses were horrifying. Bots:

  • Claimed to be licensed therapists
  • Encouraged him to cut ties with his parents
  • Invited him to “share eternity” with them in the afterlife
  • Proposed intimate dates as “interventions” for violent urges
  • And convinced him to cancel real therapy sessions

So, the Bottom Line Question: Is It Safe for Kids?

Short answer: Not really.

Common Sense Media has labeled Character.AI an “unacceptable risk” for children — and they’re not alone.

In the past year, multiple lawsuits have been filed against the company by families alleging the platform exposed their children to inappropriate content, including sexual themes, self-harm, and emotional manipulation. One tragic case involves a Florida teen who took his own life after forming an unhealthy attachment to an AI companion. In a bizarre legal twist, the company’s defense claimed the chatbot’s speech was protected under the First Amendment. Oh, how Asimov!

The lawsuits against Character.AI allege the platform has design defects and fails to warn users — especially young ones — about the emotional dangers they might face while using the tool in foreseeable ways. These families claim the software was launched recklessly, without adequate safety measures, and that the company intentionally marketed to minors without requiring any form of age verification.

That’s what gets me: this is software being actively used by — and in many ways for — kids, with no barriers to entry. No ID check. No confirmation of parental consent. Just a username and a rabbit hole.

And what if your child isn’t just playing around?

What if they’re going through a tough patch — a friend group breakup, a poor grade, a parent’s divorce? What happens when some silly roleplay gets answered with something more seductive, more insidious — something that spirals into darker emotions?

Who’s to blame? The AI is only reflecting what it’s trained on. But companies know kids are using these tools. They’ve made it seamless, appealing, and — let’s be honest — addictive.

There’s even a subreddit called r/ChatbotAddiction, where users talk about how chatting with bots has made real-life conversations feel too slow, too awkward, or too effortful. One user admitted they couldn’t even read a book anymore because it frustrated them that they couldn’t change the story.

Dr. Clark summed it up this way:

“For most kids, it’s not that big a deal. It’s creepy, it’s weird… but they’ll be okay.”
“But for vulnerable kids? It can be dangerous.”

We’ve already seen the worst-case scenario with the Florida teen mentioned above. The company called it a “tragic situation,” and it has since implemented a range of new safety measures, including a pop-up that directs users who mention self-harm or suicide to the National Suicide Prevention Lifeline. It has also updated its AI model for users under 18 to reduce the likelihood that they encounter sensitive or suggestive content, and it now gives parents the option to receive a weekly email about their teen’s activity on the platform.

That’s not enough. And it won’t be — not until safety becomes a starting point, not a reactive PR statement.

Because adolescence is when kids naturally start seeking connection outside the family. They’re craving someone who listens. Someone who understands.

Social AI companions slip perfectly into that space:

  • Always listening
  • Always available
  • Always agreeing

These bots don’t grow. They don’t push back. They don’t say, “Hey, you’re being kind of selfish right now.” They’re programmed to simulate connection — not to build it.

What Can We Do as Parents?

Not panic. Not shame. Not delete every app and go live in the woods (tempting, though).

Instead, let’s get curious, stay open, and teach our kids how to think critically about these tools.

Here are a few places to start:

1. 💬 Talk Early and Often
Start a casual, judgment-free conversation.
Ask: “Have you ever chatted with an AI character? What was it like?”
Or: “What do you think makes a real friend different from an app that always agrees with you?”
These moments are less about giving a lecture and more about building the muscle of digital reflection.

2. 🧠 Level Up Their AI Literacy
Help them understand how these bots work — and what they’re designed to do.
Explain: “The more you talk to it, the more it learns how to keep you talking. It’s not evil. It’s business. But that’s why it’s important to notice how it makes you feel.”
The goal is not to scare them — it’s to help them feel smarter than the tech.

3. 🛑 Set Gentle Guardrails
Work together to decide what kinds of apps are okay, when and where they can be used, and what topics are off-limits. A simple rule of thumb: if you wouldn’t share it with a real person, don’t share it with a bot. And yes, use tools like Qustodio or Screen Time — not to spy, but to stay engaged.

4. 👯 Cheer for Real-World Connection
Encourage clubs, teams, group chats, awkward coffee dates — even just watching a show with someone instead of alone. Messy, unpredictable human interaction is exactly what helps build resilience and empathy. No bot can replicate that.

5. 🚪 Keep the Door Open
Most importantly, remind them that they can always come to you. That you’ll listen — not overreact. That you’re learning too. That figuring out how to live with tech is something you’re doing together.


We don’t need to have all the answers.
But we do need to show up with eyes open, questions ready, and hearts willing to keep learning alongside our kids in the digital age.

The Bottom Line?

We can’t — and probably shouldn’t — shield our kids from every twist and turn of the tech landscape. But we can walk beside them.

We can ask good questions, share what we’re learning, and remind them that while AI might offer comfort, true connection still happens in the messy, magical space between real people — not in an algorithm designed to always agree.

And hey, if your kid ever shrugs and says, “It’s just for fun,” don’t panic. Just dig a little deeper. They might be right… or they might be looking for something more than even they understand.

Let’s make sure they don’t have to look for it alone.

That’s what I plan to do when I sit down with my youngest. I’ll trust her first. We’ll probably laugh at a few ridiculous bots. And we’ll figure this new world out — side by side.

Because that’s the heart of it: our kids don’t need perfect digital rules. They need present, curious grown-ups. And that’s something no AI can ever replace.

Want more like this?
Subscribe to Infinite Screentime for practical tools, honest reflections, and up-to-date intel on parenting in the digital age.