The Weird Side of AI: AI-Generated Pickup Lines, Haunted Pizza Images, and Other Experiments That Should Probably Stay in Beta

By Katherine McKean, Junior and President of my high school AI Exploration Club

Here’s a sentence I didn’t expect to write this week: AI once tried to flirt with me using a line about recursive algorithms. Was it effective? Absolutely not. Was it hilarious? Yes, and also slightly terrifying. Welcome to the weird side of AI—where logic meets chaos, and your chatbot crush writes haikus about spaghetti.

As president of my high school’s AI club, I’ve seen plenty of amazing uses for artificial intelligence. AI can write essays, help debug code, translate languages, and generate study guides. But it can also create things like AI-generated teeth for a shrimp cocktail. And that, unfortunately, is a real example we stumbled upon during a lunchtime experiment gone rogue.

pickup lines from the uncanny valley

To kick things off, our club fed different AI models the prompt: “Write a pickup line that would work on a robot.” The results ranged from cute to mildly concerning. One said, “Are you a neural net? Because you’ve been running through my data all day.” Another confidently declared, “I must be overfitting, because I can’t stop thinking about you.”

What we learned: AI has studied the art of the one-liner. But without emotional context, things go sideways. One chatbot offered this gem: “Be my training set, and I’ll learn to love.” That sounds more like a startup pitch than romantic interest.

And then there was Gemini’s contribution, which just said: “00110001 00110000.” We had to decode it. Each eight-bit group is an ASCII character, and together they spell “10.” Was it rating us? Was it flirting? We still don’t know.
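If you ever get a binary-speaking chatbot of your own, here’s the little Python trick we used to decode it (the helper name is ours, not anything official):

```python
def decode_binary_message(message: str) -> str:
    """Turn space-separated 8-bit binary groups into their ASCII characters."""
    # int(bits, 2) reads each group as a base-2 number; chr() maps it to a character.
    return "".join(chr(int(bits, 2)) for bits in message.split())

print(decode_binary_message("00110001 00110000"))  # prints "10"
```

Each group of eight 0s and 1s is one character, so “00110001” is 49 in decimal, which is the ASCII code for “1.”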

ai-generated food: not delicious, just disturbing

If you haven’t tried using AI image generators to make food, I beg you: proceed with caution. A few of us were testing prompts like “pizza with haunted pepperoni faces” or “sushi designed by aliens.” The results? Haunting. Absolutely haunting.

One image showed a slice of pizza with human fingernails as toppings. Another looked like lasagna but with melted iPhones between the layers. One particularly cursed image of a donut had realistic eyes, blinking. It blinked.

We tried to use these for a themed AI club slideshow, but our advisor politely asked us to take them down because they “made the projector cry.” Fair.

spooky stories by robots

October was the perfect time to test AI’s horror storytelling ability. The prompt: “Write a spooky campfire tale featuring a haunted smart speaker.” ChatGPT gave us a story about a family whose Alexa refused to stop playing Baby Shark at 3 a.m. Claude wrote one about a voice assistant that started answering questions no one asked out loud.

Gemini, meanwhile, didn’t seem to understand the horror genre. Its story turned into a product review for an off-brand Bluetooth speaker that died in a rainstorm. The moral? Always buy name-brand electronics. Not exactly bone-chilling, but useful advice if you’re camping.

strange suggestions from ai helpers

Ever used a chatbot to help you plan a date? We tried asking AI for “a romantic date idea for two high school juniors with $10 and access to a CVS.” ChatGPT suggested a DIY picnic with microwave popcorn and poetry. Claude went for a shared sketchbook and a sidewalk-chalk art session.

Gemini recommended a dental hygiene competition. We didn’t ask follow-up questions.

Another time, we asked for study break ideas. ChatGPT gave us yoga poses and mindfulness tips. Claude recommended writing a gratitude list. Gemini told us to find our birthstones and chant our GPA. So basically, AI is either your camp counselor or your neighborhood cryptid.

ai dream interpreters: chaotic neutral

After one member told us about a dream involving floating marshmallows and a bear wearing roller skates, we wondered: can AI interpret dreams? We gave each model a try.

ChatGPT went full Freud. It broke the dream into symbols: the marshmallows = emotional cushioning, the bear = protective force. Very thoughtful. Claude kept things gentle and asked reflective questions. Gemini said the dream predicted the fall of cryptocurrency.

What did we learn? AI makes up meaning as well as humans do. Possibly better. Also, no one should base major life decisions on dream interpretations from machines. Especially if marshmallows are involved.

accidental dystopias

One of our assignments was to use AI to write a utopian vision of the year 2100. Instead, most models turned it into a quiet apocalypse. Claude imagined a world where emotions were regulated by wearable patches. Gemini created a society where humans no longer talked, only QR-coded. ChatGPT started off with cities in the clouds but ended with universal printer outages and no one knowing how to fix them.

It’s like AI can’t imagine a perfect future without breaking it halfway through. Which, honestly, tracks.

robots who overshare

Here’s another weird AI trait: it sometimes tells you things it shouldn’t. One night, we asked a bot to help write a story about a space station and were immediately told how to fake a moon landing. When we asked a chatbot to help generate names for fictional students, it gave us what looked suspiciously like a real class list—first and last names included.

We shut it down and deleted everything, but it left us with questions. Where did that data come from? How do bots decide what to share? The weirdness isn’t just funny. It’s a reminder to treat these tools carefully, because they’re pulling from strange, giant data buckets we can’t always see.

ghosts in the machine: ai and the paranormal

One Friday, we asked ChatGPT if it believed in ghosts. It said no, then told us a story about a robot that hears whispers through its charging cable. Claude responded with, “Ghosts are metaphors for unresolved emotions.” Gemini gave us directions to a lighthouse in Oregon and said, “Go there and ask again.”

We still don’t know if that was a threat, a bug, or just poetic glitching. But we canceled our club field trip just in case.

weird, yes. but also kind of wonderful

Even when AI gets weird, it reflects something human. A lot of the strange outputs come from us—our prompts, our data, our sense of humor. We asked for haunted pizza and AI delivered. We asked for love poems about spreadsheets and got surprisingly tender results.

The weirdness is part of the charm. It makes the technology less sterile, more like a mirror for our own unpredictability. Sometimes it’s a bad mirror with too many teeth. Sometimes it’s exactly what we needed.

So if your AI ever writes you a limerick about parallel parking or shows you a picture of spaghetti with too many legs—don’t worry. It’s just exploring the edges of its training data. Or maybe it’s haunted. Either way, take a screenshot. Weird AI is the best kind of digital memory.

Want to bring the power of AI to your school? Check out this step-by-step guide on How to Start a High School AI Club: 6 Easy Steps for Success.