What AI Can’t Do (Yet): Cooking, Feelings, and Group Projects
By Katherine McKean, Junior and President of my high school’s AI Exploration Club
People love asking if AI is going to take over everything. My grandmother is convinced AI will one day make dinner, raise children, and explain TikTok. My physics teacher thinks it will write all future research papers. Meanwhile, my group project partners are hoping AI can handle our slide deck, even though no one has actually picked a topic yet.
The truth is, AI is impressive. It can summarize articles, write stories, suggest outfits, even compose music. But there are still some things it absolutely cannot do. Or at least not well. And some of those things are surprisingly basic.
Group Projects: AI Doesn’t Get It
Let’s start with something most high schoolers know too well: group projects. You’d think a system trained on billions of words would be great at team collaboration. But ask AI to split up a project between four students, one of whom refuses to check their email and another who changes the topic every five minutes, and it short-circuits faster than our school printer during finals week.
AI can suggest roles: “Student A does research, Student B creates slides,” and so on. But it doesn’t account for the weird social dynamics of real-life teams. It doesn’t know that Student C is mad at Student A because of something that happened during last semester’s Model UN. Or that Student D is mostly here for the snacks.
AI thinks group work is logical. It is not. There’s no code for passive-aggressive Google Doc comments or pretending you “lost Wi-Fi” to skip a Zoom call. Human teamwork is messy, and AI doesn’t do messy. Yet.
Cooking: AI Can’t Taste
Yes, you can ask AI for recipes. Yes, it will give you one. No, it probably hasn’t tested it. It doesn’t know if the texture is off or if it tastes like sadness. It also doesn’t know your oven runs hot or that you’re out of baking soda and decided to wing it with baking powder and hope for the best.
AI doesn’t eat. It doesn’t smell. It doesn’t panic when the risotto burns. It can suggest “drizzle with olive oil and serve warm” but has never experienced that moment of triumph when your soufflé actually rises.
I once asked ChatGPT for a vegan cookie recipe, and it gave me something that, when baked, resembled insulation foam. It was taste-tested by my little brother, who promptly asked if I was trying to poison him.
Emotions: Still Not Quite Human
AI can sound like it cares. It can say things like “I’m here for you” or “That must be tough.” But it doesn’t actually know what disappointment feels like. Or joy. Or that particular combination of rage and heartbreak that happens when your favorite character gets killed off in the last episode of a series.
AI doesn’t experience emotions; it imitates them. It has no memories, no identity crisis at midnight, no awkward family dinners. It won’t cry during a Pixar movie or laugh when someone falls up the stairs. Its empathy is pre-programmed. It can’t tell when you’re just sad or having a Tuesday.
That doesn’t mean it can’t help. Sometimes a chatbot is easier to talk to than a person. But just know—when it says, “That sounds hard,” it’s repeating a phrase from a training set. Not having a moment.
Common Sense: AI Is Still Catching Up
One time I asked AI how to sneak a snack into class without getting caught. It said, “Speak to your teacher about your dietary needs.” It meant well. But no. That is not the vibe.
AI lacks what humans call “common sense”—those unspoken social cues we pick up over time. It doesn’t know you shouldn’t microwave a fork or that saying “Calm down” to someone who is clearly not calm will only make things worse.
It might give you a ten-step guide for tying your shoes but forget to mention you need to wear socks first. Or offer travel advice that forgets your passport expires next week.
Handling Conflicting Opinions: AI Doesn’t Take Sides
Ask AI which pizza topping is best, and it will say something like: “Pizza preferences are subjective.” Which is true. But it doesn’t help when you’re trying to settle an argument in the lunchroom.
AI is trained to avoid conflict. It won’t tell you that pineapple is obviously superior (because it fears you might disagree). It wants to be neutral, helpful, and vaguely noncommittal. Great if you want facts. Not great if you’re looking for a debate partner who can commit to team mushrooms or die trying.
Making Friends: AI Can Talk, but Not Hang Out
AI is getting better at small talk. It can ask how your day was and give surprisingly thoughtful replies. But it won’t show up to your birthday party or share inside jokes from second period. It won’t spot when you’re zoning out or offer half of its fries without asking.
Friendship isn’t just conversation. It’s being present, noticing when someone needs space, laughing at weird stuff, and forgiving each other for dumb mistakes. AI might say the right things. But it doesn’t mean them.
Surprises: AI Is Too Predictable
Sometimes people are weird, and that’s what makes them fun. Your friend might randomly show up dressed as a banana for spirit week or send you a playlist of songs that remind them of your cat. AI can mimic surprise, but it’s usually just remixing what it’s seen before.
It doesn’t invent new holiday traditions or start a pillow fight at 3 a.m. It doesn’t get bored and try to bake lasagna with pancake mix. Humans are spontaneous. AI, not yet.
Cultural Nuance: AI Misses the Inside Jokes
Ask AI to write a Boston-style roast, and it might mention “wicked cold weather” or “Fenway Park,” but it won’t capture the exact tone of a Dunkin’-fueled argument about Sox stats during a Red Line delay.
AI struggles with nuance, sarcasm, and context. It can misunderstand a joke or take a figure of speech literally. I once told it I was “dead from homework” and it offered grief counseling. Sweet. Not helpful.
Regional culture, slang, generational humor—it all gets fuzzy. It can try, but it often feels like someone from another planet trying to use a meme.
Originality: AI Recycles
AI doesn’t create from scratch. It reuses patterns, phrases, and ideas it’s already seen. Ask it for a story about time travel and a waffle and you might get something clever—but it’s stitching together existing tropes.
That doesn’t mean it isn’t useful. But if you’re hoping for that weird spark of creativity that no one’s seen before, it still comes from human brains. Especially the kind that doodle during math class.
Fixing Itself: AI Doesn’t Self-Correct Well
If you ask AI something and it gets it wrong, it doesn’t always realize. It might say, “Oh, sorry,” and still repeat the mistake. It can be confident and wrong at the same time. Like your classmate who didn’t read the book but still raises their hand a lot.
It’s not great at noticing when something doesn’t make sense. It might contradict itself mid-conversation and not even blink. Which can make it hard to fully trust.
People adjust when they’re confused. AI just keeps generating.
Still Figuring It Out
AI is fast, smart, and sometimes helpful. But it’s not human. It can’t feel awkward. Or taste soup. Or sit in silence when words aren’t enough. It can be part of the conversation, but it can’t be the whole one.
And honestly? That’s kind of reassuring.
Want to bring the power of AI to your school? Check out this step-by-step guide on How to Start a High School AI Club: 6 Easy Steps for Success.