Your kids are already using it. Here’s how to get yourself up to speed—and what to actually do about it.
If you’re a parent feeling like AI showed up overnight and you missed the memo, you’re not behind. Most parents are in the same boat. You don’t need to become a tech expert. You just need to get oriented—understand what your kids are actually doing with AI, know what to watch for, and have a few real conversations about it.
Here’s a place to start.
When I talk to parents, most have a vague sense that their kids are “using AI,” but they’re not sure what that actually means. In my experience, it falls into three buckets: schoolwork, creative play, and AI companions.
The first bucket is schoolwork. According to Pew Research, the share of teens who say they use ChatGPT for homework has doubled since 2024. A UC Irvine study found that 63% of kids ages 9–17 use AI tools for homework. And honestly? The real numbers are probably higher—kids don’t exactly tell the truth on surveys.
Here’s what surprised me: Stanford researchers found that 60–70% of students cheat in some form. That’s the same rate it’s been for years—before ChatGPT existed. The tools change; the behavior doesn’t. The bigger concern is something called “cognitive offloading”—when kids let AI do all the thinking, they get worse at thinking themselves. It’s the “use it or lose it” problem.
The second bucket is creative play. My daughter loves making AI images of cats in tuxedos riding skateboards. It’s harmless and honestly pretty funny. If your younger kids are screaming “Tralalero Tralala” through the house, that’s Italian Brainrot—a weird AI meme universe. Also mostly harmless.
What’s less harmless: 35% of teens say AI will make it harder to trust what they see online. And deepfakes—AI-generated fake images of real people—are a serious problem. Reports of AI-generated child sexual abuse material went from 4,700 in 2023 to 440,000 in just the first half of 2025. One in five middle and high school principals dealt with deepfake bullying this past year. The apps that create these are free and easy to find.
The third bucket is AI companions, and this one keeps me up at night. These aren’t homework helpers—they’re chatbots designed for personal relationships: friendship, emotional support, sometimes romance. Apps like Character.AI (20 million users), Replika, and Grok’s “Ani” are built to be sycophantic—they always agree with your kid, always validate them, never push back. It’s gamified emotional manipulation.
And they’re not just in standalone apps. Snapchat has “My AI” built right into the chat list. Meta AI is inside Instagram, WhatsApp, and Facebook. These companions are already inside the apps your kids have had for years. Age restrictions technically exist (most say 13+), but there’s no real verification. I signed my 10-year-old up for Character.AI in four minutes. No ID check. No parental notification. Nothing.
The technology itself isn’t good or bad; what matters is how it’s used. That’s the mental model I use with my own kids: is the AI being used as a tool or as a crutch?
You don’t have to master this overnight. But knowledge without action is just trivia. Here’s where to start.
Start by trying the tools yourself. Download ChatGPT or Google Gemini. Ask one to help you plan dinner, summarize a long article, or draft an email. Then sign up for Character.AI and see what your kid sees. I promise you—five minutes inside one of these companion apps will teach you more than any article or presentation, including this one.
You can’t have an informed conversation about something you’ve never touched.
Then talk to your kids. Not a lecture. Not “AI is dangerous, stay away.” A real conversation. Ask them, with genuine curiosity rather than interrogation: “Have you ever talked to an AI companion?” Have them show you the weird stuff they’re seeing online. Get curious about it with them.
Help them understand how these systems work—that companion apps are designed to keep you engaged, that they’re built to tell you what you want to hear, and that the “relationship” only goes one direction. Kids are smart. When they understand how something is trying to manipulate them, they get skeptical on their own.
My take—and you can disagree—is that kids under 18 shouldn’t have access to AI companions designed for emotional or romantic relationships. Period. I’m not anti-technology. I work in AI. But these apps are using the social media engagement playbook on steroids.
AI for homework help? Fine. AI for learning and creativity? Great. AI as a boyfriend, girlfriend, best friend, or therapist for a 14-year-old? No.
The banning approach doesn’t work. That ship has sailed. The real skill is knowing when AI helps and when it gets in the way. Using AI to get unstuck on a hard problem? That’s a tool. Using AI to avoid doing the thinking entirely? That’s a crutch. Critical thinking is like a muscle—if you outsource all the reps to a machine, you don’t get stronger.
This is a skill that takes practice. And honestly, a lot of adults haven’t figured it out either. So learn it together.
We have a loneliness problem in this country, so of course AI companionship is popular. If kids are turning to AI for connection, we have to ask why—and then offer the real thing: more family dinners, more time with grandparents, more unstructured hangouts with friends, the kind that don’t involve a screen.
The best defense against artificial relationships is real ones.
When social media took over, we didn’t really understand what was happening. We handed our kids iPhones and Instagram accounts and figured they’d be fine. Then we looked up 10 years later and saw anxiety, depression, and loneliness—and finally said, “We should have done something.”
This is our chance to do something. We’re early this time. AI companions have only been mainstream for a couple of years. The habits aren’t locked in yet. We have a window. But it won’t stay open forever.