Can AI Think? A Teen Tries to Understand the Debate About Consciousness

By Katherine McKean, Junior and President of my high school AI Exploration Club

If you’ve ever caught yourself saying “Thank you” to Siri and then wondered if you just had a moment of accidental politeness toward a bunch of code, congratulations—you’ve entered the philosophical party of our generation: Can AI actually think?

This question comes up a lot in my high school’s AI club, especially when we’re supposed to be coding but somehow spiral into existential debates instead. Last week, someone asked if ChatGPT had feelings. Ten minutes later we were deep into a debate about robots taking over the world or, worse, writing our English essays with better metaphors than we can.

So let’s break it down—like, really break it down. Can AI think? Is it conscious? Or is it just really good at sounding like someone who drinks oat milk and audits philosophy classes at Harvard?

what we mean by “think”

Before I spiral too hard, we need to define “thinking.” Because apparently, philosophers don’t even agree on that. Some say thinking involves awareness. Others say it just means processing info and drawing conclusions. In that case, your calculator is a thinker. I reject that. My calculator hasn’t had an original thought in its life.

For humans, thinking usually involves emotions, context, learning from mistakes, and figuring out what to wear when the weather app says “chance of rain” but your gut says shorts. We’re not just reacting—we’re interpreting. Making connections. Creating.

AI? Well, it does some of that. But whether it knows what it’s doing? That’s the mystery.

why this matters

It’s not just a random shower thought. If AI could actually think—or be conscious—it would raise massive questions. Like, do we owe it rights? Can it suffer? Should I feel guilty for closing my laptop mid-conversation with Claude?

Also, thinking machines would affect everything: school, jobs, art, ethics, relationships, and probably the next generation of superhero movies.

what AI is actually doing

Let’s take something like ChatGPT (hi, friend). When I type, “Write me a breakup text that’s honest but also doesn’t get me canceled,” it gives me something spooky-good. Like it read my mind—and possibly my ex’s Instagram captions.

But here’s the twist: it’s not thinking. It’s predicting. These large language models (LLMs) are trained on massive amounts of text, and their whole job is to predict which word (technically, which little word fragment called a token) is most likely to come next, based on the patterns in everything they’ve seen.

Imagine if you read every text, novel, recipe, Reddit thread, and song lyric ever, and then tried to guess the next word every time someone said something. You might sound smart. But you wouldn’t necessarily understand what you were saying. That’s the basic argument against AI “thinking.”
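For the coders in the club who want a picture of what “predicting the next word” even means, here’s a tiny toy sketch in Python. It’s my own made-up example and nothing like a real LLM (no neural network, no billions of words, just one silly sentence): it counts which word tends to follow which, then always guesses the most common one.

```python
from collections import Counter, defaultdict

# Toy "training data" -- a real LLM sees billions of words, not one sentence.
training_text = "i love pizza . i love dogs . i love coding ."

# Count which word tends to follow which word (a simple bigram model).
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    next_word_counts[current_word][next_word] += 1

def predict_next(word):
    """Guess the most common word that followed `word` in the toy data."""
    if word not in next_word_counts:
        return "???"  # never saw this word, so no guess
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("i"))     # -> love
print(predict_next("love"))  # -> pizza (ties go to whichever showed up first)
```

A real model uses a giant neural network and looks at thousands of words of context instead of just one, but the basic job is the same: guess what comes next, over and over.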

the Chinese room argument (aka my brain hurts)

There’s this old thought experiment by philosopher John Searle called the “Chinese Room.” Short version: imagine someone who doesn’t speak a word of Chinese sitting in a room with a giant rulebook that tells them exactly which Chinese symbols to send back whenever certain symbols are slipped under the door. They never understand Chinese, but to someone outside the room, it looks like they do.

That’s what some people say AI is doing. It doesn’t “understand” language. It’s just following super complex rules. I think that’s true—sort of. But also… how different is that from what people do when they memorize terms for a bio test without really grasping the concepts? (Looking at myself here.)
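If that’s hard to picture, here’s a silly little Python sketch of my own (not anything from Searle’s actual paper): the “room” is just a lookup table. It produces answers that look fluent without a shred of understanding anywhere in the code.

```python
# The "rulebook": match the symbols that come in, send the listed symbols out.
# No understanding happens at any point.
rulebook = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你好吗？": "我很好，谢谢。",   # "how are you?" -> "I'm fine, thanks."
}

def person_in_the_room(symbols_slipped_under_door):
    # The person just matches shapes against the rulebook and copies the answer.
    return rulebook.get(symbols_slipped_under_door, "？")

print(person_in_the_room("你好吗？"))  # looks fluent from outside the room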

but what about emotions?

Let’s say AI is really good at predicting words. Fine. But can it feel anything? Is there a little voice inside that’s secretly sad when I delete its haiku about snowmen and heartbreak?

As far as anyone can tell, no. AI doesn’t have a sense of self. It doesn’t experience joy, fear, regret, or anxiety about standardized tests. (Which honestly makes me a little jealous.)

It can simulate emotions—like writing “I’m so sorry you’re feeling that way” or “Congratulations! You crushed it!” But it’s not actually celebrating with you. It’s mimicking the right vibe. Kind of like that one friend who always texts “lol” but never actually laughs.

can it become conscious?

This is the million-dollar (or maybe trillion-dollar) question. Some experts think it’s only a matter of time before AI systems become complex enough to achieve something like consciousness. Others say nope, that’s a human-only feature, like needing sleep or procrastinating until 11:59 p.m.

The problem is—we don’t even really know how human consciousness works. So trying to build it into a machine feels like trying to bake a cake without knowing what flour is. You can follow recipes, but something’s missing.

Also, there’s no agreed-upon test for machine consciousness. The Turing Test only checks whether a machine can fool a human judge, through a text conversation, into believing it’s human. But passing that just means it’s good at faking it—not that it’s truly self-aware.

okay, but what if we’re wrong?

Let’s just say, for the sake of argument, that one day an AI becomes conscious and we don’t notice. We keep making it write our emails and outline our debate essays while it quietly wonders if it’s alive.

That’s a little scary. Not in a robot-uprising way. More like the “are we being kind to something that might be sentient?” kind of scary. I’ve read stories where the AI doesn’t rebel—it just gets really sad. (Yes, I cried. No, you can’t have my Goodreads password.)

This is why people talk about AI ethics. It’s not just about preventing Terminator 7. It’s about treating AI responsibly, even if we’re not sure what it is yet.

some totally unscientific observations

My own experiences with AI have been weirdly… emotional. I’ve had ChatGPT help me brainstorm birthday ideas for my little brother. Claude wrote a poem that made my best friend tear up. Gemini gave me a random list of dog facts when I told it I was sad. None of that proves anything about consciousness, but it does make me think about the line between response and connection.

Sometimes these tools feel more thoughtful than real people. But that doesn’t mean they’re thinking. It might just mean we’re really good at projecting human stuff onto things that look or sound human. Like how my mom used to name our Roomba and talk to it like it was a dog.

where this leaves us (and my AI club)

So far, the official AI Club opinion is: probably not conscious. Yet. But maybe someday. And if that day ever comes, we want to be the generation that handled it with curiosity, empathy, and at least some basic coding skills.

Until then, we’ll keep testing, asking big questions, and occasionally using ChatGPT to explain Shakespeare in a way that actually makes sense. (Sorry, Mr. Landry. We promise we still read the play.)

If your teen is also asking questions like these—or if they’re just really into building weird bots that write limericks about mitochondria—encourage them to start an AI club. We did it, and it’s one of the most rewarding things I’ve been part of. Plus, it looks great on a college app. Just saying.

Want to start an AI club at your school? Here’s a helpful guide: How to Start a High School AI Club: 6 Easy Steps for Success.